CN111176440B - Video call method and wearable device

Video call method and wearable device

Info

Publication number: CN111176440B
Application number: CN201911154489.7A
Authority: CN (China)
Prior art keywords: video, target, wearable device, special effect, video object
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN111176440A
Inventor: 王强 (Wang Qiang)
Current assignee: Guangdong Genius Technology Co Ltd
Original assignee: Guangdong Genius Technology Co Ltd
Application filed by Guangdong Genius Technology Co Ltd
Priority: CN201911154489.7A (granted as CN111176440B); divisional application CN202410083247.8A (published as CN117908677A)


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/14: Systems for two-way working
    • H04N7/141: Systems for two-way working between two video terminals, e.g. videophone
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality


Abstract

A video call method and a wearable device are disclosed, comprising the following steps: during a video call between the wearable device and a video object, comprehensively analyzing image information and voice information of the wearing user of the wearable device and the video object to obtain the current video atmosphere type; determining a target special effect model matching the current video atmosphere from a special effect model library; and controlling the scene special effect corresponding to the target special effect model to be displayed on a display screen of the wearable device, so as to enhance or relieve emotion. By implementing the embodiments of the application, the interest of the video call can be effectively enhanced and the occurrence probability of video calls can be increased.

Description

Video call method and wearable device
Technical Field
The application relates to the technical field of wearable devices, and in particular to a video call method and a wearable device.
Background
When a user makes a video call with a smart watch, the display screen of the smart watch normally only outputs the user-side picture and the video-object-side picture in real time, so as to achieve the effect of face-to-face communication. With the continuous development of society, however, a mode that merely displays the pictures of the two parties to the video can no longer satisfy people's increasingly rich living demands, which is not conducive to increasing the occurrence probability of video calls.
Disclosure of Invention
The embodiments of the application disclose a video call method and a wearable device, which are beneficial to increasing the occurrence probability of video calls.
The first aspect of the application discloses a video call method, which comprises the following steps:
comprehensively analyzing image information and voice information of a wearing user of the wearable device and a video object during a video call between the wearable device and the video object, to obtain a current video atmosphere type;
determining a target special effect model matching the current video atmosphere from a special effect model library;
and controlling the scene special effect corresponding to the target special effect model to be displayed on a display screen of the wearable device, so as to enhance or relieve emotion.
As an optional implementation manner, in the first aspect of the embodiments of the application, after comprehensively analyzing the image information and voice information of the wearing user of the wearable device and the video object during the video call to obtain the current video atmosphere type, the method further comprises:
determining a target color temperature and a target illuminance for the lighting device of the environment where the wearing user is located, according to the current video atmosphere type;
and sending a parameter adjustment request carrying the target color temperature and the target illuminance to the lighting device, so that the lighting device adjusts the color temperature to the target color temperature and the illuminance to the target illuminance.
As an optional implementation manner, in the first aspect of the embodiments of the application, after determining the target special effect model matching the current video atmosphere from the special effect model library, the method further comprises:
acquiring the wearing user's history browsing record on an online video platform;
determining a target scene special effect among the scene special effects corresponding to the target special effect model, according to the history browsing record;
the controlling the scene special effect corresponding to the target special effect model to be displayed on the display screen of the wearable device then comprises:
controlling the target scene special effect to be displayed on the display screen of the wearable device.
As an optional implementation manner, in the first aspect of the embodiments of the application, the controlling the target scene special effect to be displayed on the display screen of the wearable device comprises:
detecting instruction information indicating a display area for the target scene special effect;
and controlling the target scene special effect to be displayed on the display screen of the wearable device according to the instruction information.
As an optional implementation manner, in the first aspect of the embodiments of the application, the method further comprises:
when the video call is terminated, detecting whether a grouping request for the video object input by the wearing user is received;
if the grouping request is received, evaluating the intimacy index between the wearing user and the video object according to all scene special effects displayed during the video call;
setting identification information for the social account of the video object according to the intimacy index;
and determining a target group matching the identification information among the groups contained in the social account of the wearing user, and adding the social account of the video object to the target group.
A second aspect of embodiments of the present application discloses a wearable device, comprising:
an analysis unit, configured to comprehensively analyze image information and voice information of a wearing user of the wearable device and a video object during a video call between the wearable device and the video object, to obtain a current video atmosphere type;
a determining unit, configured to determine a target special effect model matching the current video atmosphere from a special effect model library;
and a display unit, configured to control the scene special effect corresponding to the target special effect model to be displayed on a display screen of the wearable device, so as to enhance or relieve emotion.
As an optional implementation manner, in the second aspect of the embodiments of the application, the determining unit is further configured to determine, after the image information and voice information of the wearing user and the video object are comprehensively analyzed during the video call to obtain the current video atmosphere type, a target color temperature and a target illuminance for the lighting device of the environment where the wearing user is located, according to the current video atmosphere type;
the wearable device further comprises:
a transmitting unit, configured to send a parameter adjustment request carrying the target color temperature and the target illuminance to the lighting device, so that the lighting device adjusts the color temperature to the target color temperature and the illuminance to the target illuminance.
As an optional implementation manner, in the second aspect of the embodiments of the application, the wearable device further comprises:
an acquisition unit, configured to acquire the wearing user's history browsing record on an online video platform after the determining unit determines the target special effect model matching the current video atmosphere from the special effect model library, and to determine a target scene special effect among the scene special effects corresponding to the target special effect model according to the history browsing record;
the manner in which the display unit controls the scene special effect corresponding to the target special effect model to be displayed on the display screen of the wearable device is then specifically:
the display unit is configured to control the target scene special effect to be displayed on the display screen of the wearable device.
As an optional implementation manner, in the second aspect of the embodiments of the application, the manner in which the display unit controls the target scene special effect to be displayed on the display screen of the wearable device is specifically:
the display unit is configured to detect instruction information indicating a display area for the target scene special effect, and to control the target scene special effect to be displayed on the display screen of the wearable device according to the instruction information.
As an optional implementation manner, in the second aspect of the embodiments of the application, the wearable device further comprises:
a detection unit, configured to detect, when the video call is terminated, whether a grouping request for the video object input by the wearing user is received;
an evaluation unit, configured to evaluate, when the grouping request is received, the intimacy index between the wearing user and the video object according to all scene special effects displayed during the video call, and to set identification information for the social account of the video object according to the intimacy index;
and a grouping unit, configured to determine a target group matching the identification information among the groups contained in the social account of the wearing user, and to add the social account of the video object to the target group.
A third aspect of an embodiment of the present application discloses a wearable device, comprising:
a memory storing executable program code;
a processor coupled to the memory;
the processor invokes the executable program code stored in the memory to execute the steps of the video call method disclosed in the first aspect of the embodiment of the present application.
A fourth aspect of the embodiments of the present application discloses a computer-readable storage medium, on which computer instructions are stored, which when executed cause a computer to perform the steps of the video call method disclosed in the first aspect of the embodiments of the present application.
Compared with the prior art, the embodiments of the application have the following beneficial effects:
in the embodiments of the application, during a video call between the wearable device and a video object, the image information and voice information of the wearing user of the wearable device and the video object are comprehensively analyzed to obtain the current video atmosphere type; a target special effect model matching the current video atmosphere is determined from a special effect model library; and the scene special effect corresponding to the target special effect model is controlled to be displayed on the display screen of the wearable device, so as to enhance or relieve emotion. By implementing the embodiments of the application, different scene special effects are added according to the change of video atmosphere during the user's video call, which can effectively enhance the interest of the video call and increase the occurrence probability of video calls.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a video call method disclosed in an embodiment of the present application;
FIG. 2 is a flow chart of another video call method disclosed in an embodiment of the present application;
FIG. 3 is a flow chart of another video call method disclosed in an embodiment of the present application;
FIG. 4 is a modular schematic diagram of a wearable device disclosed in an embodiment of the present application;
FIG. 5 is a modular schematic diagram of another wearable device disclosed in an embodiment of the present application;
FIG. 6 is a modular schematic diagram of yet another wearable device disclosed in an embodiment of the present application;
fig. 7 is a modular schematic diagram of yet another wearable device disclosed in an embodiment of the present application.
Detailed Description
The following description of the technical solutions in the embodiments of the present application will be made clearly and completely with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
It should be noted that the terms "comprises" and "comprising," along with any variations thereof, in the embodiments of the present application are intended to cover non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus, but may include other steps or elements not expressly listed.
The embodiments of the application disclose a video call method and a wearable device, which are beneficial to increasing the occurrence probability of video calls. A detailed description is given below with reference to the accompanying drawings.
Embodiment 1
Referring to fig. 1, fig. 1 is a flowchart of a video call method disclosed in an embodiment of the present application, and as shown in fig. 1, the video call method may include the following steps:
101. In the process of the video call between the wearable device and the video object, comprehensively analyzing the image information and voice information of the wearing user of the wearable device and the video object to obtain the current video atmosphere type.
Optionally, before comprehensively analyzing the image information and voice information of the wearing user and the video object to obtain the current video atmosphere type, the following steps may further be performed:
judging whether an arm swing action input by the wearing user of the wearable device is received;
if the arm swing action is received, obtaining the direction and the force of the arm swing action;
judging whether the direction of the arm swing action is a preset direction, and whether the force of the arm swing action is greater than a preset force;
when the direction of the arm swing action is the preset direction and the force of the arm swing action is greater than the preset force, controlling the scene special effect mode to start, and continuing to perform the comprehensive analysis of the image information and voice information of the wearing user and the video object to obtain the current video atmosphere type.
By implementing this method, the scene special effect mode of the video call can be started efficiently by detecting the arm swing action of the wearing user of the wearable device.
It should be noted that, in the embodiments of the application, the scene special effect mode of the video call may be started not only by the arm swing of the wearing user but also by pressing a virtual button on the video call interface, which is not limited in the embodiments of the application.
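For illustration only, the following Python sketch shows one way the arm swing check described above might operate on accelerometer data. The sampling format, the preset direction, and the force threshold are assumptions for demonstration; the embodiment does not prescribe them.

```python
import math

# Hedged sketch: PRESET_DIRECTION, PRESET_FORCE and the cosine tolerance are
# assumed values, not taken from the embodiment.
PRESET_DIRECTION = (1.0, 0.0, 0.0)   # assumed preset swing direction (unit vector)
PRESET_FORCE = 12.0                  # assumed force threshold (peak acceleration, m/s^2)
DIRECTION_TOLERANCE = 0.8            # cosine similarity required to match the preset direction

def swing_direction_and_force(samples):
    """Return (unit direction, magnitude) of the strongest accelerometer sample."""
    peak = max(samples, key=lambda a: math.sqrt(a[0]**2 + a[1]**2 + a[2]**2))
    mag = math.sqrt(peak[0]**2 + peak[1]**2 + peak[2]**2)
    return tuple(c / mag for c in peak), mag

def should_start_scene_effect_mode(samples):
    """Start the scene special effect mode only when both checks pass."""
    direction, force = swing_direction_and_force(samples)
    cos_sim = sum(d * p for d, p in zip(direction, PRESET_DIRECTION))
    return cos_sim >= DIRECTION_TOLERANCE and force > PRESET_FORCE

# A short burst of (x, y, z) accelerometer readings during a swing:
samples = [(2.0, 0.5, 9.8), (14.0, 1.0, 9.6), (5.0, 0.2, 9.9)]
print(should_start_scene_effect_mode(samples))  # True for this made-up burst
```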
In the embodiments of the application, the video atmosphere types preset by the wearable device may include tense, sad, cheerful, and the like, and each video atmosphere type is associated with a plurality of facial features and a plurality of keywords. On this basis, comprehensively analyzing the image information and voice information of the wearing user and the video object during the video call to obtain the current video atmosphere type may include:
analyzing the image information of the wearing user of the wearable device and the video object during the video call, to obtain facial features;
analyzing the voice information of the wearing user and the video object, and extracting words indicating emotion;
and determining the current video atmosphere type from the video atmosphere types preset by the wearable device, according to the facial features and the words indicating emotion.
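As a minimal sketch of this decision step, the snippet below assumes each preset atmosphere type is associated with facial features and keywords as described; the association tables and the simple score-counting rule are illustrative stand-ins for whatever analysis the wearable device actually applies.

```python
# Assumed association tables: the embodiment only says each atmosphere type is
# associated with a plurality of facial features and a plurality of keywords.
ATMOSPHERE_TYPES = {
    "cheerful": {"features": {"smile", "raised_cheeks"}, "keywords": {"great", "happy", "haha"}},
    "tense":    {"features": {"frown", "tight_lips"},    "keywords": {"hurry", "worried", "deadline"}},
    "sad":      {"features": {"downturned_mouth"},       "keywords": {"miss", "sorry", "cry"}},
}

def classify_atmosphere(facial_features, emotion_words):
    """Score each preset type by how many of its features and keywords matched."""
    def score(profile):
        return (len(profile["features"] & facial_features)
                + len(profile["keywords"] & emotion_words))
    return max(ATMOSPHERE_TYPES, key=lambda t: score(ATMOSPHERE_TYPES[t]))

# Features extracted from both parties' images, words extracted from their speech:
print(classify_atmosphere({"smile"}, {"happy", "haha"}))  # cheerful
```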
Optionally, in the embodiments of the application, after determining the current video atmosphere type from the video atmosphere types preset by the wearable device according to the facial features and the words indicating emotion, the following steps may further be performed:
collecting physiological parameters of the wearing user of the wearable device, wherein the physiological parameters of the wearing user include at least blood glucose, blood pressure, body temperature and respiratory rate;
judging whether the physiological parameters of the wearing user are within the parameter range corresponding to the current video atmosphere type;
and if the physiological parameters are within the parameter range corresponding to the current video atmosphere type, continuing with step 102.
By implementing this method, the current video atmosphere type is determined comprehensively from the facial features, the words indicating emotion, and the physiological parameters of the wearing user of the wearable device, which improves the accuracy with which the current video atmosphere type is determined.
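The gate before step 102 can be sketched as a simple range check; the numeric ranges below are invented placeholders, since the embodiment only states that each atmosphere type has a corresponding parameter range.

```python
# Assumed per-type parameter ranges covering the four parameters named above.
PARAM_RANGES = {
    "cheerful": {
        "blood_glucose": (3.9, 7.8),       # mmol/L, assumed
        "blood_pressure": (90, 140),       # systolic mmHg, assumed
        "body_temperature": (36.0, 37.5),  # degrees C, assumed
        "respiratory_rate": (12, 22),      # breaths/min, assumed
    },
}

def params_within_range(params: dict, atmosphere: str) -> bool:
    """True only if every collected parameter lies inside the type's range."""
    ranges = PARAM_RANGES.get(atmosphere, {})
    return all(lo <= params[name] <= hi for name, (lo, hi) in ranges.items())

user_params = {"blood_glucose": 5.2, "blood_pressure": 118,
               "body_temperature": 36.6, "respiratory_rate": 16}
if params_within_range(user_params, "cheerful"):
    pass  # proceed to step 102: determine the target special effect model
```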
102. Determining a target special effect model matching the current video atmosphere from the special effect model library.
103. Controlling the scene special effect corresponding to the target special effect model to be displayed on the display screen of the wearable device, so as to enhance or relieve emotion.
The special effect model library stores, for each video atmosphere type preset by the wearable device, the corresponding special effect model and the scene special effects belonging to that model. By implementing this method, the change of video atmosphere during the video call is analyzed in real time and different scene special effects are added accordingly, which can effectively enhance the interest of the video call, increase the occurrence probability of video calls, improve the accuracy with which the current video atmosphere type is determined, and enable efficient starting of the scene special effect mode of the video call.
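A sketch of the library lookup in steps 102 to 103 follows; the library contents and the display call are hypothetical, assuming only the structure described above (one special effect model per preset atmosphere type, each holding scene special effects).

```python
# Assumed library layout: atmosphere type -> {effect name: effect asset}.
EFFECT_MODEL_LIBRARY = {
    "cheerful": {"confetti": "confetti.anim", "balloons": "balloons.anim"},
    "tense":    {"calm_waves": "waves.anim"},
    "sad":      {"warm_sunrise": "sunrise.anim"},
}

def target_effect_model(current_atmosphere: str) -> dict:
    """Step 102: look up the model matching the current video atmosphere."""
    return EFFECT_MODEL_LIBRARY[current_atmosphere]

def display_scene_effect(model: dict, screen) -> None:
    """Step 103: render one of the model's scene effects on the display screen.
    `screen.render` is a hypothetical display API of the wearable device."""
    name, asset = next(iter(model.items()))
    screen.render(asset)
```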
Embodiment 2
Referring to fig. 2, fig. 2 is a flowchart of another video call method disclosed in an embodiment of the present application, where the video call method shown in fig. 2 may include the following steps:
201. In the process of the video call between the wearable device and the video object, comprehensively analyzing the image information and voice information of the wearing user of the wearable device and the video object to obtain the current video atmosphere type.
202. Determining, according to the current video atmosphere type, a target color temperature and a target illuminance for the lighting device of the environment where the wearing user of the wearable device is located.
203. Sending a parameter adjustment request carrying the target color temperature and the target illuminance to that lighting device, so that it adjusts its color temperature to the target color temperature and its illuminance to the target illuminance.
By performing steps 202 to 203, the illuminance and color temperature of the lighting device in the wearing user's environment are adjusted according to the current video atmosphere type, so that the lighting device can assist in enhancing or relieving emotion.
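One way steps 202 to 203 could look in code, assuming the lighting device accepts a JSON parameter adjustment request over HTTP; the endpoint URL, the field names, and the atmosphere-to-lighting table are all assumptions, as the embodiment defines only what the request carries.

```python
import json
import urllib.request

# Assumed mapping from atmosphere type to lighting targets.
LIGHTING_FOR_ATMOSPHERE = {
    "cheerful": {"color_temp_k": 4500, "illuminance_lux": 500},
    "tense":    {"color_temp_k": 2700, "illuminance_lux": 200},
}

def send_parameter_adjustment(atmosphere: str, device_url: str) -> None:
    """Send a request carrying the target color temperature and illuminance."""
    targets = LIGHTING_FOR_ATMOSPHERE[atmosphere]
    body = json.dumps({
        "target_color_temperature": targets["color_temp_k"],
        "target_illuminance": targets["illuminance_lux"],
    }).encode("utf-8")
    req = urllib.request.Request(device_url, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)  # the lamp applies both targets on receipt

# send_parameter_adjustment("cheerful", "http://192.168.1.20/adjust")  # hypothetical lamp URL
```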
204. Determining a target special effect model matching the current video atmosphere from the special effect model library.
For a detailed description of steps 201 and 204, please refer to the description of steps 101 and 102 in the first embodiment, which is not repeated here.
205. Acquiring the wearing user's history browsing record on an online video platform.
206. Determining a target scene special effect among the scene special effects corresponding to the target special effect model, according to the history browsing record.
As an optional implementation manner, in the embodiments of the application, after step 206 the following steps may further be performed:
acquiring physiological parameters of the video object, wherein the physiological parameters of the video object include at least blood glucose, blood pressure, body temperature and respiratory rate;
judging whether the physiological parameters of the video object are within the parameter range corresponding to the current video atmosphere type;
if the physiological parameters are within the parameter range corresponding to the current video atmosphere type, detecting whether a sending instruction for the target scene special effect is received;
if the sending instruction is received, packaging the target scene special effect to obtain a file package;
and sending the file package to the video object, so that the target scene special effect is displayed on the video call interface of the video object.
By implementing this method, the emotion of the video object can also be improved by means of the target scene special effect.
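The packaging-and-sending path can be sketched as below, assuming the scene effect's assets are bundled into a zip file package and handed to the call's data channel; the container format and the send_to_peer hook are assumptions, since the embodiment requires only that the effect is packaged into a file package and sent.

```python
import io
import zipfile

def package_scene_effect(asset_files: dict) -> bytes:
    """Bundle {filename: bytes} effect assets into one in-memory zip package."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, data in asset_files.items():
            zf.writestr(name, data)
    return buf.getvalue()

def send_effect_to_video_object(asset_files: dict, send_to_peer) -> None:
    """send_to_peer is a hypothetical callback onto the video call's data channel;
    the receiving side unpacks the package and shows the effect on its interface."""
    send_to_peer(package_scene_effect(asset_files))

# Example: send_effect_to_video_object({"confetti.anim": b"..."}, channel.send)
```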
207. Controlling the target scene special effect to be displayed on the display screen of the wearable device.
The following illustrates steps 206 to 207 by way of example:
Assume the wearing user of the wearable device is a child, the current video atmosphere type obtained in step 201 is the cheerful type, and the target special effect model in step 204 is the special effect model indicating cheerfulness in the special effect model library, the target special effect model comprising scene special effects corresponding to a plurality of themes. The online video platform mentioned in step 205 is an animation video platform, and the wearing user's history browsing record on that platform is the child's animation viewing record within a preset time period; the duration of the preset time period may be one week, one month or one quarter, and the end of the preset time period may be the current time point. On this basis, determining the target scene special effect among the scene special effects corresponding to the target special effect model according to the history browsing record comprises: obtaining the animation video with the highest viewing frequency from the history browsing record; determining an animation theme from the name of that animation video; determining, among the plurality of themes corresponding to the target special effect model, a target theme matching the animation theme; and taking the scene special effect corresponding to the target theme as the target scene special effect. In this way, the target scene special effect is determined from the child's animation viewing record within the preset time period, so that it better fits the child's interests, which further increases the interest of the video call.
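The example reduces to a small selection routine; in the sketch below, the name-to-theme table and the effect names are illustrative assumptions.

```python
from collections import Counter

# Assumed tables: animation name -> theme, and the cheerful model's
# theme -> scene effect mapping. Neither is specified by the embodiment.
NAME_TO_THEME = {"Space Rangers": "space", "Ocean Pals": "ocean"}
CHEERFUL_MODEL = {"space": "rocket_confetti", "ocean": "bubble_burst"}

def target_scene_effect(viewing_record: list[str]) -> str:
    """Pick the effect whose theme matches the most-watched animation."""
    most_watched, _ = Counter(viewing_record).most_common(1)[0]
    theme = NAME_TO_THEME[most_watched]          # theme from the video's name
    return CHEERFUL_MODEL[theme]                 # scene effect for that theme

record = ["Space Rangers", "Ocean Pals", "Space Rangers"]  # last month's views
print(target_scene_effect(record))  # rocket_confetti
```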
As an optional implementation manner, in the embodiments of the application, controlling the target scene special effect to be displayed on the display screen of the wearable device may comprise: detecting instruction information indicating a display area for the target scene special effect; and controlling the target scene special effect to be displayed on the display screen of the wearable device according to the instruction information. By implementing this method, the display area of the target scene special effect can be controlled flexibly.
By implementing the method of this embodiment, the change of video atmosphere during the video call is analyzed in real time and different scene special effects are added accordingly. This can effectively enhance the interest of the video call, increase the occurrence probability of video calls, improve the accuracy with which the current video atmosphere type is determined, enable efficient starting of the scene special effect mode, use the lighting device in the wearing user's environment to assist in enhancing or relieving emotion, make the target scene special effect fit the child's interests more closely, allow flexible control of the display area of the target scene special effect, and improve the emotion of the video object by means of the target scene special effect.
Embodiment 3
Referring to fig. 3, fig. 3 is a flowchart of another video call method disclosed in an embodiment of the present application, where the video call method shown in fig. 3 may include the following steps:
For a detailed description of steps 301 to 307, please refer to the description of steps 201 to 207 in the second embodiment, which is not repeated here.
308. When the video call is terminated, detecting whether a grouping request for the video object input by a wearing user of the wearable device is received, and if so, executing steps 309 to 311; if not, the process is ended.
309. Evaluating the intimacy index between the wearing user of the wearable device and the video object according to all scene special effects displayed during the video call.
310. Setting identification information for the social account of the video object according to the intimacy index.
311. Determining a target group matching the identification information among the groups contained in the social account of the wearing user of the wearable device, and adding the social account of the video object to the target group.
By performing steps 308 to 311, video objects are grouped automatically according to all scene special effects displayed during the video call, which helps manage the social account of the wearing user of the wearable device.
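A sketch of steps 309 to 311 follows; the per-effect weights and the index-to-label cutoffs are assumptions, since the embodiment specifies only that the intimacy index is evaluated from all scene special effects displayed during the call.

```python
# Assumed weights per displayed effect type and assumed label cutoffs.
EFFECT_WEIGHTS = {"cheerful": 2, "sad": 1, "tense": -1}

def intimacy_index(displayed_effect_types: list[str]) -> int:
    """Step 309: accumulate a score over all effects shown during the call."""
    return sum(EFFECT_WEIGHTS.get(t, 0) for t in displayed_effect_types)

def identification_label(index: int) -> str:
    """Step 310: turn the index into identification information."""
    return "close friend" if index >= 3 else "friend" if index >= 1 else "acquaintance"

def group_video_object(groups: dict, peer_account: str, effects: list[str]) -> None:
    """Step 311: file the peer's account into the matching group."""
    label = identification_label(intimacy_index(effects))
    groups.setdefault(label, []).append(peer_account)

groups = {"close friend": [], "friend": [], "acquaintance": []}
group_video_object(groups, "peer_account_01", ["cheerful", "cheerful", "sad"])
print(groups["close friend"])  # ['peer_account_01']
```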
It should be noted that, in the embodiments of the application, the group for the video object may be determined according to all the scene special effects displayed during the video call, may be determined by manual selection of the wearing user of the wearable device, or may be determined according to grouping information sent by a terminal device associated with the wearable device, which is not limited in the embodiments of the application.
The determination of the group for the video object according to grouping information sent by the terminal device associated with the wearable device is described in detail below:
In the embodiments of the application, if the wearing user of the wearable device is a young child and the terminal device associated with the wearable device is a parent terminal, the following steps may further be performed:
when the social account of the wearing user receives an account-adding request, obtaining information about the requesting object;
transmitting that information to the parent terminal associated with the wearable device;
when an adding instruction fed back by the parent terminal is received, judging whether the adding instruction carries grouping information for the requesting object;
and if so, adding and grouping the requesting object according to the grouping information.
In this way, when the wearing user of the wearable device is a minor child, parents can monitor the child's social circle in real time, so as to protect the child from malicious social harm.
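A sketch of this parent-approval flow is given below; the in-process callback standing in for the parent terminal is an assumption, as in the embodiment the parent end is a separate associated device.

```python
from dataclasses import dataclass

@dataclass
class AddInstruction:
    approved: bool
    group: str | None = None  # grouping information, if the parent supplied any

def handle_add_request(requester_info: dict, ask_parent, groups: dict) -> None:
    """Forward an account-adding request to the parent end and act on its feedback."""
    instruction: AddInstruction = ask_parent(requester_info)  # send info to parent end
    if not instruction.approved:
        return                                   # parent rejected the request
    if instruction.group is not None:            # instruction carries grouping info
        groups.setdefault(instruction.group, []).append(requester_info["account"])

# Example parent callback that approves and files the contact under "classmates":
groups: dict = {}
handle_add_request({"account": "acct_42", "name": "Li Lei"},
                   lambda info: AddInstruction(True, "classmates"), groups)
print(groups)  # {'classmates': ['acct_42']}
```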
By implementing the method of this embodiment, the change of video atmosphere during the video call is analyzed in real time and different scene special effects are added accordingly. This can effectively enhance the interest of the video call, increase the occurrence probability of video calls, improve the accuracy with which the current video atmosphere type is determined, enable efficient starting of the scene special effect mode, use the lighting device in the wearing user's environment to assist in enhancing or relieving emotion, make the target scene special effect fit the child's interests more closely, allow flexible control of the display area of the target scene special effect, improve the emotion of the video object by means of the target scene special effect, help manage the social account of the wearing user, and protect young children from malicious social harm.
Embodiment 4
Referring to fig. 4, fig. 4 is a schematic diagram of a wearable device according to an embodiment of the present disclosure. As shown in fig. 4, the wearable device may include:
the analysis unit 401 is configured to comprehensively analyze the image information and voice information of the wearing user of the wearable device and the video object during the video call between the wearable device and the video object, so as to obtain the current video atmosphere type.
Optionally, the analyzing unit 401 is further configured to comprehensively analyze image information and voice information of a wearing user of the wearable device and a video object, and determine whether an arm swinging action input by the wearing user of the wearable device is received before obtaining the current video atmosphere type; if the arm swinging action is received, the direction and the force of the arm swinging action are obtained; judging whether the direction of the swing arm action is a preset direction or not, and judging whether the force of the swing arm action is greater than a preset force or not; when the direction of the arm swing action is a preset direction and the force of the arm swing action is a preset force, controlling to start a scene special effect mode, and triggering and executing the comprehensive analysis of the image information and the voice information of the wearing user and the video object of the wearable device to obtain the current video atmosphere type. According to the method, the efficient starting of the scene special effect mode of the video call is achieved by detecting the arm swinging action of the wearing user of the wearable device.
It should be noted that, in the embodiment of the present application, the starting of the scene special effect mode of the video call of the wearable device may be implemented by pressing a virtual button of the video call interface in addition to the arm swing of the wearing user of the wearable device, which is not limited in the embodiment of the present application.
In this embodiment of the present application, the preset video atmosphere of the wearable device may include tension, mind, cheerful, and the like, and each video atmosphere type is associated with a plurality of facial features and a plurality of keyword respectively, and based on the description, the analysis unit 401 is configured to comprehensively analyze, in a process of video communication between the wearable device and the video object, image information and voice information of a wearing user of the wearable device and the video object, so as to obtain a current video atmosphere type in a specific manner: an analysis unit 401, configured to analyze image information of a wearing user of the wearable device and a video object to obtain facial features during a video call between the wearable device and the video object; analyzing voice information of a wearing user of the wearable device and the video object, and extracting words for indicating emotion; and determining the current video atmosphere type from the preset video atmospheres of the wearable equipment according to the facial features and words for indicating emotion.
Optionally, in the embodiment of the present application, the analysis unit 401 is further configured to collect physiological parameters of a wearing user of the wearable device after determining a current video atmosphere type from a video atmosphere preset by the wearable device according to facial features and words for indicating emotion; wherein the physiological parameters of the wearing user of the wearable device at least comprise blood glucose, blood pressure, body temperature and respiratory frequency; judging whether the physiological parameters of the wearing user are in the parameter range corresponding to the current video atmosphere type; and the physiological parameter of the wearing user is in the parameter range corresponding to the current video atmosphere type, and the trigger determining unit 402 executes the following operation of determining the target special effect model matched with the current video atmosphere from the special effect model library. According to the implementation mode, the current video atmosphere type is comprehensively determined according to the facial features, words for indicating emotion and physiological parameters of the wearing user of the wearable device, and the determination accuracy of the current video atmosphere type is improved.
The determining unit 402 is configured to determine a target special effect model matching the current video atmosphere from the special effect model library.
The display unit 403 is configured to control the scene special effect corresponding to the target special effect model to be displayed on the display screen of the wearable device, so as to enhance or relieve emotion.
By implementing this wearable device, the change of video atmosphere during the video call is analyzed in real time and different scene special effects are added accordingly, which can effectively enhance the interest of the video call, increase the occurrence probability of video calls, improve the accuracy with which the current video atmosphere type is determined, and enable efficient starting of the scene special effect mode of the video call.
Embodiment 5
Referring to fig. 5, fig. 5 is a schematic diagram of another wearable device disclosed in an embodiment of the application. The wearable device shown in fig. 5 is obtained by optimizing the wearable device shown in fig. 4. In the wearable device shown in fig. 5, the determining unit 402 may be further configured to determine, after the analysis unit 401 comprehensively analyzes the image information and voice information of the wearing user and the video object during the video call and obtains the current video atmosphere type, a target color temperature and a target illuminance for the lighting device of the environment where the wearing user of the wearable device is located, according to the current video atmosphere type.
The wearable device may further include:
a transmitting unit 404, configured to send a parameter adjustment request carrying the target color temperature and the target illuminance to the lighting device in the wearing user's environment, so that the lighting device adjusts the color temperature to the target color temperature and the illuminance to the target illuminance.
Optionally, in an embodiment of the present application, the wearable device may further include:
an obtaining unit 405, configured to acquire the wearing user's history browsing record on an online video platform after the determining unit 402 determines the target special effect model matching the current video atmosphere from the special effect model library, and to determine a target scene special effect among the scene special effects corresponding to the target special effect model according to the history browsing record.
The manner in which the display unit 403 controls the scene special effect corresponding to the target special effect model to be displayed on the display screen of the wearable device is then specifically:
the display unit 403 is configured to control the target scene special effect to be displayed on the display screen of the wearable device.
Further optionally, the manner in which the display unit 403 controls the target scene special effect to be displayed on the display screen of the wearable device is specifically: the display unit 403 detects instruction information indicating a display area for the target scene special effect, and controls the target scene special effect to be displayed on the display screen of the wearable device according to the instruction information. By implementing this method, the display area of the target scene special effect can be controlled flexibly.
As an optional implementation manner, in the embodiments of the application, the obtaining unit 405 is further configured to acquire physiological parameters of the video object after determining the target scene special effect among the scene special effects corresponding to the target special effect model according to the history browsing record, wherein the physiological parameters of the video object include at least blood glucose, blood pressure, body temperature and respiratory rate; judge whether the physiological parameters of the video object are within the parameter range corresponding to the current video atmosphere type; if they are, detect whether a sending instruction for the target scene special effect is received; if the sending instruction is received, package the target scene special effect to obtain a file package; and send the file package to the video object, so that the target scene special effect is displayed on the video call interface of the video object. In this embodiment, the emotion of the video object can also be improved by means of the target scene special effect.
In the embodiments of the application, for the description of the obtaining unit 405, please refer to the example in the second embodiment, which is not repeated here. On that basis, the manner in which the obtaining unit 405 determines the target scene special effect among the scene special effects corresponding to the target special effect model according to the history browsing record is specifically:
the obtaining unit 405 obtains the animation video with the highest viewing frequency from the history browsing record; determines an animation theme from the name of that animation video; determines, among the plurality of themes corresponding to the target special effect model, a target theme matching the animation theme; and takes the scene special effect corresponding to the target theme as the target scene special effect. In this way, the target scene special effect is determined from the child's animation viewing record within the preset time period, so that it better fits the child's interests, which further increases the interest of the video call.
By implementing this wearable device, the change of video atmosphere during the video call is analyzed in real time and different scene special effects are added accordingly. This can effectively enhance the interest of the video call, increase the occurrence probability of video calls, improve the accuracy with which the current video atmosphere type is determined, enable efficient starting of the scene special effect mode, use the lighting device in the wearing user's environment to assist in enhancing or relieving emotion, make the target scene special effect fit the child's interests more closely, allow flexible control of the display area of the target scene special effect, and improve the emotion of the video object by means of the target scene special effect.
Embodiment 6
Referring to fig. 6, fig. 6 is a schematic diagram of yet another wearable device disclosed in an embodiment of the application. The wearable device shown in fig. 6 is obtained by optimizing the wearable device shown in fig. 5, and may further include:
a detection unit 406, configured to detect, when the video call is terminated, whether a grouping request for the video object input by the wearing user of the wearable device is received;
an evaluation unit 407, configured to evaluate, when the grouping request is received, the intimacy index between the wearing user of the wearable device and the video object according to all scene special effects displayed during the video call, and to set identification information for the social account of the video object according to the intimacy index;
and a grouping unit 408, configured to determine a target group matching the identification information among the groups contained in the social account of the wearing user of the wearable device, and to add the social account of the video object to the target group.
The grouping unit 408 thus automatically groups video objects according to all the scene special effects displayed during the video call, which helps manage the social account of the wearing user of the wearable device.
It should be noted that, in the embodiments of the application, the group for the video object may be determined according to all the scene special effects displayed during the video call, may be determined by manual selection of the wearing user of the wearable device, or may be determined according to grouping information sent by a terminal device associated with the wearable device, which is not limited in the embodiments of the application.
The determination of the group for the video object according to grouping information sent by the terminal device associated with the wearable device is described in detail below:
In the embodiments of the application, if the wearing user of the wearable device is a young child and the terminal device associated with the wearable device is a parent terminal, the grouping unit 408 is further configured to obtain information about the requesting object when the social account of the wearing user receives an account-adding request; transmit that information to the parent terminal associated with the wearable device; when an adding instruction fed back by the parent terminal is received, judge whether the adding instruction carries grouping information for the requesting object; and, if so, group the requesting object according to the grouping information. In this way, when the wearing user of the wearable device is a minor child, parents can monitor the child's social circle in real time, so as to protect the child from malicious social harm.
By implementing this wearable device, the change of video atmosphere during the video call is analyzed in real time and different scene special effects are added accordingly. This can effectively enhance the interest of the video call, increase the occurrence probability of video calls, improve the accuracy with which the current video atmosphere type is determined, enable efficient starting of the scene special effect mode, use the lighting device in the wearing user's environment to assist in enhancing or relieving emotion, make the target scene special effect fit the child's interests more closely, allow flexible control of the display area of the target scene special effect, improve the emotion of the video object by means of the target scene special effect, help manage the social account of the wearing user, and protect young children from malicious social harm.
Referring to fig. 7, fig. 7 is a schematic diagram of a wearable device according to an embodiment of the present disclosure. As shown in fig. 7, the wearable device may include:
memory 701 storing executable program code
A processor 702 coupled to the memory;
the processor 702 invokes executable program code stored in the memory 701 to perform the steps of the video call method described in any of fig. 1 to 3.
It should be noted that, in this embodiment of the present application, the wearable device shown in fig. 7 may further include components that are not shown, such as a speaker module, a light projection module, a battery module, a wireless communication module (such as a mobile communication module, a WIFI module, a bluetooth module, etc.), a sensor module (such as a proximity sensor, etc.), an input module (such as a microphone, a key), and a user interface module (such as a charging interface, an external power supply interface, a card slot, a wired earphone interface, etc.).
Embodiments of the present application disclose a computer readable storage medium having stored thereon computer instructions that, when executed, cause a computer to perform the steps of the video call method described in any of fig. 1-3.
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods of the above embodiments may be implemented by a program that instructs associated hardware, the program may be stored in a computer readable storage medium including Read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), programmable Read-Only Memory (Programmable Read-Only Memory, PROM), erasable programmable Read-Only Memory (Erasable Programmable Read Only Memory, EPROM), one-time programmable Read-Only Memory (OTPROM), electrically erasable programmable Read-Only Memory (EEPROM), compact disc Read-Only Memory (Compact Disc Read-Only Memory, CD-ROM) or other optical disk Memory, magnetic disk Memory, tape Memory, or any other medium that can be used for carrying or storing data that is readable by a computer.
The video call method and the wearable device disclosed in the embodiments of the application are described in detail above, and specific examples are used herein to explain the principles and implementations of the application; the descriptions of the above embodiments are only intended to help understand the method and its core ideas. Meanwhile, a person skilled in the art may make modifications to the specific implementations and the application scope according to the ideas of the application. In summary, the contents of this specification should not be construed as limiting the application.

Claims (8)

1. A video call method, comprising:
comprehensively analyzing image information and voice information of a wearing user of a wearable device and a video object during a video call between the wearable device and the video object, to obtain a current video atmosphere type;
determining a target special effect model matching the current video atmosphere from a special effect model library;
acquiring the wearing user's history browsing record on an online video platform;
determining a target scene special effect among the scene special effects corresponding to the target special effect model, according to the history browsing record;
acquiring physiological parameters of the video object, wherein the physiological parameters of the video object comprise at least blood glucose, blood pressure, body temperature and respiratory rate;
judging whether the physiological parameters of the video object are within the parameter range corresponding to the current video atmosphere type;
if the physiological parameters are within the parameter range corresponding to the current video atmosphere type, detecting whether a sending instruction for the target scene special effect is received;
if the sending instruction is received, packaging the target scene special effect to obtain a file package;
sending the file package to the video object, so that the target scene special effect is displayed on a video call interface of the video object;
and controlling the target scene special effect to be displayed on a display screen of the wearable device, so as to enhance or relieve emotion.
2. The method according to claim 1, wherein, after comprehensively analyzing the image information and voice information of the wearing user of the wearable device and the video object during the video call to obtain the current video atmosphere type, the method further comprises:
determining a target color temperature and a target illuminance for the lighting device of the environment where the wearing user is located, according to the current video atmosphere type;
and sending a parameter adjustment request carrying the target color temperature and the target illuminance to the lighting device, so that the lighting device adjusts the color temperature to the target color temperature and the illuminance to the target illuminance.
3. The method of claim 1, wherein the controlling the target scene special effect to be displayed on a display screen of the wearable device comprises:
detecting instruction information indicating a display area for the target scene special effect;
and controlling the target scene special effect to be displayed on the display screen of the wearable device according to the instruction information.
4. The method according to claim 1, wherein the method further comprises:
when the video call is terminated, detecting whether a grouping request for the video object input by the wearing user is received;
if the grouping request is received, evaluating the intimacy index of the wearing user and the video object according to all scene special effects displayed in the video call process;
setting identification information of a social account of the video object according to the intimacy index;
and determining a target group matched with the identification information from groups contained in the social account number of the wearing user, and adding the social account number of the video object to the target group.
5. A wearable device, comprising:
an analysis unit, configured to comprehensively analyze image information and voice information of a wearing user of the wearable device and a video object during a video call between the wearable device and the video object, to obtain a current video atmosphere type;
a determining unit, configured to determine a target special effect model matching the current video atmosphere from a special effect model library;
an acquisition unit, configured to acquire the wearing user's history browsing record on an online video platform, and to determine a target scene special effect among the scene special effects corresponding to the target special effect model according to the history browsing record;
the acquisition unit being further configured to acquire physiological parameters of the video object, wherein the physiological parameters of the video object comprise at least blood glucose, blood pressure, body temperature and respiratory rate; judge whether the physiological parameters of the video object are within the parameter range corresponding to the current video atmosphere type; if the physiological parameters are within the parameter range corresponding to the current video atmosphere type, detect whether a sending instruction for the target scene special effect is received; if the sending instruction is received, package the target scene special effect to obtain a file package; and send the file package to the video object, so that the target scene special effect is displayed on a video call interface of the video object;
and a display unit, configured to control the target scene special effect to be displayed on a display screen of the wearable device, so as to enhance or relieve emotion.
6. The wearable device of claim 5, wherein the determining unit is further configured to, during a video call between the wearable device and a video object, comprehensively analyze image information and voice information of the wearing user of the wearable device and of the video object and, after the current video atmosphere type is obtained, determine, according to the current video atmosphere type, a target color temperature and a target illuminance for a lighting device in the environment of the wearing user;
the wearable device further comprising:
a transmitting unit, configured to transmit a parameter adjustment request carrying the target color temperature and the target illuminance to the lighting device, so that the lighting device adjusts its color temperature to the target color temperature and its illuminance to the target illuminance.
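The "comprehensive analysis" recited in claims 5 and 6 is left open by the patent; one simple realization is sketched below, fusing per-frame facial-emotion scores and per-utterance voice-emotion scores from both parties into a single atmosphere type. The emotion labels, the upstream classifiers, and the averaging fusion are assumptions for illustration only.

```python
# Hedged sketch of the comprehensive analysis: average facial-emotion and
# voice-emotion confidence scores from both parties, then pick the dominant
# emotion as the video atmosphere type. The fusion rule is an assumption.
from statistics import mean

EMOTIONS = ("cheerful", "neutral", "sad")

def fuse_atmosphere(image_scores: list[dict], voice_scores: list[dict]) -> str:
    """Each dict maps emotion -> confidence in [0, 1] for one frame/utterance."""
    combined = {
        e: mean([s[e] for s in image_scores] + [s[e] for s in voice_scores])
        for e in EMOTIONS
    }
    return max(combined, key=combined.get)

atmosphere = fuse_atmosphere(
    image_scores=[{"cheerful": 0.7, "neutral": 0.2, "sad": 0.1}],
    voice_scores=[{"cheerful": 0.5, "neutral": 0.4, "sad": 0.1}],
)
print(atmosphere)  # cheerful
```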
7. The wearable device of claim 5, wherein the display unit is specifically configured to control the target scene special effect to be displayed on the display screen of the wearable device by:
detecting instruction information indicating a display area for the target scene special effect, and controlling the target scene special effect to be displayed on the display screen of the wearable device in accordance with the instruction information.
8. The wearable device of claim 5, further comprising:
a detection unit, configured to detect, when the video call terminates, whether a grouping request for the video object input by the wearing user is received;
an evaluation unit, configured to, when the grouping request is received, evaluate an intimacy index between the wearing user and the video object according to all scene special effects displayed during the video call, and set identification information for the social account of the video object according to the intimacy index;
and a grouping unit, configured to determine, from the groups contained in the social account of the wearing user, a target group matching the identification information, and to add the social account of the video object to the target group.
CN201911154489.7A 2019-11-22 2019-11-22 Video call method and wearable device Active CN111176440B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911154489.7A CN111176440B (en) 2019-11-22 2019-11-22 Video call method and wearable device
CN202410083247.8A CN117908677A (en) 2019-11-22 2019-11-22 Video call method and wearable device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911154489.7A CN111176440B (en) 2019-11-22 2019-11-22 Video call method and wearable device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202410083247.8A Division CN117908677A (en) 2019-11-22 2019-11-22 Video call method and wearable device

Publications (2)

Publication Number Publication Date
CN111176440A (en) 2020-05-19
CN111176440B (en) 2024-03-19

Family

ID=70655380

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202410083247.8A Pending CN117908677A (en) 2019-11-22 2019-11-22 Video call method and wearable device
CN201911154489.7A Active CN111176440B (en) 2019-11-22 2019-11-22 Video call method and wearable device

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202410083247.8A Pending CN117908677A (en) 2019-11-22 2019-11-22 Video call method and wearable device

Country Status (1)

Country Link
CN (2) CN117908677A (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112327720B * 2020-11-20 2022-09-20 Beijing Kankan Zhiyu Technology Co., Ltd. Atmosphere management method and system
CN112565913B * 2020-11-30 2023-06-20 Vivo Mobile Communication Co., Ltd. Video call method and apparatus, and electronic device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102082870A * 2010-12-28 2011-06-01 Dongguan Yulong Communication Technology Co., Ltd. Method and device for managing contacts, and mobile terminal
CN104703043A * 2015-03-26 2015-06-10 Nubia Technology Co., Ltd. Method and device for adding video special effects
CN108052670A * 2017-12-29 2018-05-18 Beijing Qihoo Technology Co., Ltd. Method and device for recommending camera special effects
CN108401129A * 2018-03-22 2018-08-14 Guangdong Genius Technology Co., Ltd. Video call method, apparatus, and terminal based on a wearable device, and storage medium
CN108882454A * 2018-07-20 2018-11-23 Foshan University Intelligent speech recognition interactive lighting method and system based on emotion judgment
CN109933666A * 2019-03-18 2019-06-25 Xidian University Automatic friend classification method, apparatus, computer device, and storage medium
CN109996026A * 2019-04-23 2019-07-09 Guangdong Genius Technology Co., Ltd. Video special effect interaction method, apparatus, device, and medium based on a wearable device

Also Published As

Publication number Publication date
CN117908677A (en) 2024-04-19
CN111176440A (en) 2020-05-19

Similar Documents

Publication Publication Date Title
CN109040824B (en) Video processing method and device, electronic equipment and readable storage medium
CN108363706B (en) Method and device for man-machine dialogue interaction
US20210133459A1 (en) Video recording method and apparatus, device, and readable storage medium
US20220080261A1 (en) Recommendation Method Based on Exercise Status of User and Electronic Device
EP3051463B1 (en) Image processing method and electronic device for supporting the same
CN105845124B (en) Audio processing method and device
CN109982124A Intelligent user scenario analysis method, device, and storage medium
CN105933539B (en) audio playing control method and device and terminal
CN108848313B (en) Multi-person photographing method, terminal and storage medium
CN108833262B (en) Session processing method, device, terminal and storage medium
CN107925799A (en) Method and apparatus for generating video content
CN111176440B (en) Video call method and wearable device
WO2020135334A1 (en) Television application theme switching method, television, readable storage medium, and device
CN107948672A Method and system for saving video data, server, and wearable device
WO2017049485A1 (en) Information processing method and smart wristband
CN115278139A (en) Video processing method and device, electronic equipment and storage medium
US10810439B2 (en) Video identification method and device
CN115702993B (en) Rope skipping state detection method and electronic equipment
CN109525791A (en) Information recording method and terminal
CN111951787A (en) Voice output method, device, storage medium and electronic equipment
CN111698532B (en) Bullet screen information processing method and device
CN112446243A (en) Electronic device and emotion-based content pushing method
CN110196900A Interaction method and device for terminal
CN109376252A Topic battle method and device
CN109963180A (en) Video information statistical method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant