CN110784676A - Data processing method, terminal device and computer readable storage medium - Google Patents

Data processing method, terminal device and computer readable storage medium

Info

Publication number
CN110784676A
Authority
CN
China
Prior art keywords
data
user
terminal
processing
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911035248.0A
Other languages
Chinese (zh)
Other versions
CN110784676B (en)
Inventor
肖明
李凌志
陆伟峰
朱荣昌
朱明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Transsion Holdings Co Ltd
Original Assignee
Shenzhen Transsion Holdings Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Transsion Holdings Co Ltd filed Critical Shenzhen Transsion Holdings Co Ltd
Priority to CN201911035248.0A priority Critical patent/CN110784676B/en
Publication of CN110784676A publication Critical patent/CN110784676A/en
Application granted granted Critical
Publication of CN110784676B publication Critical patent/CN110784676B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/14 Systems for two-way working
    • H04N 7/141 Systems for two-way working between two video terminals, e.g. videophone
    • H04N 7/147 Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/04 Real-time or near real-time messaging, e.g. instant messaging [IM]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/07 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail characterised by the inclusion of specific contents
    • H04L 51/10 Multimedia information
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 Network architectures or network communication protocols for network security
    • H04L 63/04 Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
    • H04L 63/0407 Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the identity of one or more communicating identities is hidden
    • H04L 63/0421 Anonymous communication, i.e. the party's identifiers are hidden from the other party or parties, e.g. using an anonymizer
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 3/00 Automatic or semi-automatic exchanges
    • H04M 3/42 Systems providing special services or facilities to subscribers
    • H04M 3/42008 Systems for anonymous communication between parties, e.g. by use of disposable contact identifiers

Abstract

The invention provides a data processing method, terminal equipment and a computer readable storage medium. The data processing method comprises the following steps: acquiring user data; processing the user data; and outputting the processed user data. By processing the user's real information, the effect of anonymous chat is achieved, and by combining home-terminal processing, opposite-terminal processing and intermediate-server processing, the user is given more options to choose from.

Description

Data processing method, terminal device and computer readable storage medium
Technical Field
The present invention relates to the field of mobile terminal devices, and in particular, to a data processing method, a terminal device, and a computer-readable storage medium.
Background
Anonymous chat is realized by hiding user avatars, contact information and the like, with users sending text, pictures and so on, as in QQ anonymous chat. However, the current anonymous chat mode is limited to text chat; chatting anonymously with plain text is time-consuming and cumbersome, the interaction is monotonous, and face-to-face anonymous chat cannot be realized. Conversely, when a voice or video call is used, the caller is easily identified, and the effect of anonymous chat cannot be achieved.
Disclosure of Invention
The invention mainly aims to provide a data processing method, aiming at solving the problem that existing chat software cannot realize face-to-face anonymous chat.
In order to achieve the above object, the present invention provides a data processing method, which includes the following steps:
S11: acquiring user data;
S12: processing the user data;
S13: outputting the processed user data.
Optionally, step S12 includes:
if the user data comprises audio data, processing the audio data according to a first preset rule, wherein the first preset rule comprises at least one of the following:
recognizing sound characteristics of the audio data, performing voice-change processing on the audio data according to the sound characteristics, and outputting the voice-changed audio data;
shielding the audio data.
Optionally, step S12 includes:
if the user data comprises a user image, processing the user image according to a second preset rule, wherein the second preset rule comprises at least one of the following:
shielding the user image;
identifying the characteristics of a specific area of the user image, and processing the area according to the characteristics, wherein:
if the specific area is a face area, identifying the expression characteristics of the face area of the user image, and processing the face area according to the expression characteristics; and/or,
if the specific area is at least one of an eye area, a fingerprint area, a privacy part and a preset fixed area, performing security and privacy processing, wherein the security and privacy processing comprises at least one of deleting, hiding, blurring, mosaicking, and replacing the specific area with a preset image.
Optionally, the user data includes: at least one of user information, image data, audio data, video data, and text data; and/or,
the user data is derived from at least one of a first terminal, a second terminal, and a server.
Optionally, the data processing method further includes:
displaying or hiding the user information; and/or,
displaying or hiding a viewing portal for the user information.
In order to achieve the above object, the present invention provides a data processing method applied to at least one terminal, comprising the following steps:
S21: acquiring target data, wherein the target data comprises user data, terminal information of the terminal, and/or scene information of the terminal;
S22: processing the target data;
S23: outputting the processed target data.
Optionally, the step S22 includes:
judging whether the terminal information of the terminal and/or the scene information of the terminal meet a preset rule or not;
and if so, processing the user data according to a preset strategy.
Optionally, the terminal information includes: at least one of battery level information, a security mode, and whether an early-warning condition is met;
the preset rule comprises at least one of the following:
the battery level of the terminal meets a preset battery level condition;
the security of the terminal meets a preset security mode condition;
the terminal meets a preset early-warning condition.
Optionally, the scene information includes: at least one of public transport information, high-speed movement, noise information, wake-up information, and brightness information;
the preset rule comprises at least one of the following:
the scene meets a public place condition, the public place including at least one of a subway, a bus, a train, a square, and a shopping mall;
the moving speed meets a preset speed condition;
the noise information meets a preset noise condition;
the wake-up information meets a preset wake-up condition;
the brightness meets a preset brightness condition.
Optionally, processing the user data according to a preset policy includes:
if the user data comprises audio data, processing the audio data according to a first preset rule, wherein the first preset rule comprises at least one of the following:
recognizing sound characteristics of the audio data, performing voice-change processing on the audio data according to the sound characteristics, and outputting the voice-changed audio data;
shielding the audio data.
Optionally, processing the user data according to a preset policy includes:
if the user data comprises a user image, processing the user image according to a second preset rule, wherein the second preset rule comprises at least one of the following:
shielding the user image;
identifying the characteristics of a specific area of the user image, and processing the area according to the characteristics, wherein:
if the specific area is a face area, identifying the expression characteristics of the face area of the user image, and processing the face area according to the expression characteristics; and/or,
if the specific area is at least one of an eye area, a fingerprint area, a privacy part and a preset fixed area, performing security and privacy processing, wherein the security and privacy processing comprises at least one of deleting, hiding, blurring, mosaicking, and replacing the specific area with a preset image.
In order to achieve the above object, the present invention further provides a terminal device, which includes a memory, a processor, and a control program of the data processing method stored in the memory and executable on the processor; when executed by the processor, the control program implements the steps of the data processing method described above.
To achieve the above object, the present invention further provides a computer-readable storage medium on which a control program of the data processing method is stored; when executed by a processor, the control program implements the steps of the data processing method described above.
The technical scheme of the invention achieves the effect of anonymous video chat by processing the real information of the user. By combining home-terminal processing, opposite-terminal processing and intermediate-server processing, the user is given more options: the user can select and adjust images and audio to replace the real facial image and voice, making the call process more engaging.
Drawings
Fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a data processing method according to a first embodiment of the present invention;
FIG. 3 is a flowchart illustrating a data processing method according to a second embodiment of the present invention;
FIG. 4 is a flowchart illustrating a data processing method according to a third embodiment of the present invention;
FIG. 5 is a diagram illustrating a communication interface according to a third embodiment of the data processing method of the present invention;
FIG. 6 is a diagram illustrating a communication interface according to a third embodiment of the data processing method of the present invention;
FIG. 7 is a flowchart illustrating a data processing method according to a fourth embodiment of the present invention;
FIG. 8 is a flowchart illustrating a fifth embodiment of a data processing method according to the present invention;
FIG. 9 is a flowchart illustrating a sixth embodiment of a data processing method according to the present invention;
FIG. 10 is a flowchart illustrating a seventh embodiment of a data processing method according to the present invention;
fig. 11 is a flowchart illustrating an eighth embodiment of a data processing method according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that, if directional indications (such as up, down, left, right, front, and back) are involved in the embodiment of the present invention, the directional indications are only used for explaining the relative positional relationship, the motion situation, and the like between the components in a certain posture, and if the certain posture is changed, the directional indications are changed accordingly.
In addition, the technical solutions of the various embodiments may be combined with each other, provided that a person skilled in the art can realize the combination; where the technical solutions are contradictory or cannot be realized, the combination should be considered not to exist and falls outside the protection scope of the present invention.
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The main solution of the embodiment of the invention is as follows: the method comprises the steps that after a first terminal enters a video call, a user image sent by a second terminal is received; processing a face region of the user image; and displaying the processed user image on a video call interface.
Because the anonymous chat form in the prior art is limited to text chat, chatting anonymously with plain text is time-consuming and cumbersome, the interaction is monotonous, and face-to-face anonymous chat cannot be realized.
The invention provides a data processing method in which, after a first terminal enters a video call, a user image sent by a second terminal is received; the face region of the user image is processed; and the processed user image is displayed on the video call interface. This solves the technical problem that existing chat software cannot realize face-to-face anonymous chat.
As shown in fig. 1, fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present invention.
The terminal of the embodiment of the invention may be a smart phone, or another intelligent terminal with a photographing function such as a tablet or a computer.
As shown in fig. 1, the terminal may include: a processor 1001 such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002, where the communication bus 1002 is used to realize connection and communication among these components. The user interface 1003 may include a display (Display), a camera, and an input unit such as a keyboard (Keyboard), and may optionally also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WiFi interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, the memory 1005, as a computer-readable storage medium, may include an operating system, a network communication module, and a control program of the data processing method.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a backend server and performing data communication with the backend server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to call a control program of the data processing method stored in the memory 1005, and perform the following operations:
S11: acquiring user data;
S12: processing the user data;
S13: outputting the processed user data.
Further, the processor 1001 may call a control program of the data processing method stored in the memory 1005, and also perform the following operations:
if the user data comprises audio data, processing the audio data according to a first preset rule, wherein the first preset rule comprises at least one of the following:
recognizing sound characteristics of the audio data, performing voice-change processing on the audio data according to the sound characteristics, and outputting the voice-changed audio data;
shielding the audio data.
Further, the processor 1001 may call a control program of the data processing method stored in the memory 1005, and also perform the following operations:
if the user data comprises a user image, processing the user image according to a second preset rule, wherein the second preset rule comprises at least one of the following:
shielding the user image;
identifying the characteristics of a specific area of the user image, and processing the area according to the characteristics, wherein:
if the specific area is a face area, identifying the expression characteristics of the face area of the user image, and processing the face area according to the expression characteristics; and/or,
if the specific area is at least one of an eye area, a fingerprint area, a privacy part and a preset fixed area, performing security and privacy processing, wherein the security and privacy processing comprises at least one of deleting, hiding, blurring, mosaicking, and replacing the specific area with a preset image.
Further, the processor 1001 may call a control program of the data processing method stored in the memory 1005, and also perform the following operations:
displaying or hiding the user information; and/or,
displaying or hiding a viewing portal for the user information.
The processor 1001 may call a control program of the data processing method stored in the memory 1005, and also perform the following operations:
S21: acquiring target data, wherein the target data comprises user data, terminal information of the terminal, and/or scene information of the terminal;
S22: processing the target data;
S23: outputting the processed target data.
Further, the processor 1001 may call a control program of the data processing method stored in the memory 1005, and also perform the following operations:
judging whether the terminal information of the terminal and/or the scene information of the terminal meet a preset rule or not;
and if so, processing the user data according to a preset strategy.
Further, the processor 1001 may call a control program of the data processing method stored in the memory 1005, and also perform the following operations:
if the user data comprises audio data, processing the audio data according to a first preset rule, wherein the first preset rule comprises at least one of the following:
recognizing sound characteristics of the audio data, performing voice-change processing on the audio data according to the sound characteristics, and outputting the voice-changed audio data;
shielding the audio data.
Further, the processor 1001 may call a control program of the data processing method stored in the memory 1005, and also perform the following operations:
if the user data comprises a user image, processing the user image according to a second preset rule, wherein the second preset rule comprises at least one of the following:
shielding the user image;
identifying the characteristics of a specific area of the user image, and processing the area according to the characteristics, wherein:
if the specific area is a face area, identifying the expression characteristics of the face area of the user image, and processing the face area according to the expression characteristics; and/or,
if the specific area is at least one of an eye area, a fingerprint area, a privacy part and a preset fixed area, performing security and privacy processing, wherein the security and privacy processing comprises at least one of deleting, hiding, blurring, mosaicking, and replacing the specific area with a preset image.
Based on the hardware architecture, the embodiment of the data processing method is provided.
Referring to fig. 2, fig. 2 is a first embodiment of the data processing method of the present invention, which includes the following steps:
Step S11: user data is obtained;
In this embodiment, the user data includes at least one of user information, image data, audio data, video data, and text data, where the video data is a superposition of audio data and image data; the audio data and/or image data are the original data collected by the terminal device and reflect the user's real voice and the facial image information in the real scene.
In this embodiment, the user data is derived from at least one of the first terminal, the second terminal, and the server.
In this embodiment, for convenience of distinguishing, the terminal devices in the video call state are divided into a first terminal and a second terminal. The first terminal and the second terminal are the terminal devices that establish the current call connection; either may be the call-initiating terminal or the called terminal. When the first terminal is the call-initiating terminal, the second terminal is the called terminal. While the terminal establishes the current call connection, the server may serve as a transmission carrier for the user data; that is, the server may receive user data sent by one terminal and forward it to any terminal device.
In this embodiment, when the current call is in a group chat state, the first terminal or the second terminal may receive user data sent by multiple terminals, and the user data sent by the first terminal or the second terminal may likewise be received by multiple other terminals, either directly or through the intermediate server.
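As a minimal sketch of the relay role described above, an intermediate server might fan user data out to every other terminal in the call roughly as follows. All names here (the session registry, the send() transport) are illustrative assumptions; the patent does not prescribe any particular signalling or transport mechanism:

```python
# Hypothetical sketch: an intermediate server relaying user data in a
# group chat. Each terminal registers a connection object; relay()
# forwards data from one terminal to every other terminal in the call.
from dataclasses import dataclass, field


@dataclass
class Session:
    terminals: dict = field(default_factory=dict)  # terminal_id -> connection

    def relay(self, sender_id, user_data):
        # Forward the (possibly already processed) user data from the
        # sending terminal to all other terminals in the current call.
        for terminal_id, connection in self.terminals.items():
            if terminal_id != sender_id:
                connection.send(user_data)
```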
Step S12, processing the user data;
In this embodiment, if the first terminal acquires the user data, the user data may be sent to the second terminal through the server; in this process, both the server and the second terminal can receive the user data. Therefore, the user data can be processed at the data acquisition end, namely the first terminal, or at the intermediate server or the second terminal.
In this embodiment, after the terminal device or the intermediate server receives the user data, it may process the audio data with the system's built-in audio processing software and the image data with the system's built-in image processing software, hiding, altering, or replacing the real audio data and/or image data so that the processed data no longer reflects the user's real voice, the real scene information, or the real facial image information.
In this embodiment, when the user data received by the terminal device or the server is text data, the text data may be processed by translation, font switching, and the like.
Step S13: outputting the processed user data.
In this embodiment, when the first terminal directly processes the user data acquired at the home terminal, it may send the processed user data to the second terminal or display it locally; after receiving the processed user data sent by the first terminal, the second terminal displays it, thereby achieving the effect of anonymous video chat.
In this embodiment, when the second terminal processes the user data acquired through the first terminal, it may display the processed user data on the current call interface or send it back to the first terminal; after receiving the processed user data sent by the second terminal, the first terminal displays it, likewise achieving the effect of anonymous video chat.
In this embodiment, when the intermediate server processes the user data acquired through the first terminal and/or the second terminal, it may send the processed user data to the first terminal and/or the second terminal for display. The number of terminal devices is not limited.
In this embodiment, the terminal device may send the acquired user data to the intermediate server, which performs the configured processing on it; this reduces the terminal's computational load for processing audio data, image data, and the like, and frees up runtime memory. The server then sends the processed user data back to the terminal device, which directly displays the processed user image, thereby achieving the effect of anonymous video chat.
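The three-step flow of this embodiment can be summarized in a short sketch. The helper names below are illustrative assumptions: process_audio and process_image stand in for the first and second preset rules described in the following embodiments, and output stands in for local display or onward transmission:

```python
# Illustrative sketch of the three-step flow (S11 acquire, S12 process,
# S13 output). Depending on the deployment, this function could run on
# the home terminal, the opposite terminal, or the intermediate server.
def handle_user_data(user_data, process_audio, process_image, output):
    # S11: the user data dict has already been acquired by the caller.
    # S12: process each modality that is present.
    if "audio" in user_data:
        user_data["audio"] = process_audio(user_data["audio"])
    if "image" in user_data:
        user_data["image"] = process_image(user_data["image"])
    # S13: output the processed user data (display locally or send onward).
    output(user_data)
```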
Referring to fig. 3, fig. 3 is a second embodiment of the data processing method of the present invention, and based on the first embodiment, step S12 includes:
Step S121: if the user data includes audio data, processing the audio data according to a first preset rule, where the first preset rule includes at least one of the following:
recognizing sound characteristics of the audio data, performing voice-change processing on the audio data according to the sound characteristics, and outputting the voice-changed audio data;
shielding the audio data.
In this embodiment, if the user data received by the terminal device or the intermediate server is audio data, the audio data is processed according to the first preset rule. One processing manner under the first preset rule is to identify the sound characteristics in the audio data, extracting features such as frequency and amplitude, which correspond to properties of the voice such as speaking speed and intonation, and then to perform voice-change processing on the audio data according to these characteristics. There are various specific processing manners: the user may select the sound effect corresponding to the currently selected expression to process the received voice information, or select a sound effect type from the voice library. To use the sound effect corresponding to the currently selected expression, the user can simply trigger the sound effect button on the current call interface, and the corresponding sound effect is output. After opening the sound effect button, the user can also choose among different sound effect types: clicking the floating sound-effect window on the current call interface pops up the sound effect types stored in the voice library, which can be browsed by sliding; clicking the icon of a sound effect type converts the voice to the corresponding effect. The user can switch the sound type at any time by clicking the corresponding icon.
In this embodiment, the voice library stores the sound types available to the user. These may be sound types corresponding to expressions stored in the expression library, or sound types not associated with any expression; the sound types stored in the voice library can also be downloaded and updated through a network or by other means.
In this embodiment, after the terminal extracts the variation parameters of voice characteristics such as frequency and amplitude, the sound type selected from the voice library is adjusted according to these parameters; that is, the extracted feature parameters are imported into the selected sound type for data synthesis, so that the selected sound type reflects the changes in the speaker's speed and intonation.
In this embodiment, by extracting characteristic parameters such as the frequency and amplitude of the real voice and fusing them with the sound type selected from the voice library, the character's voice-change process is presented more realistically.
In this embodiment, when receiving audio data, the terminal device or the intermediate server may also, according to the user's selection, leave the received voice information unprocessed; in this case, the effect of the anonymous call is not affected.
In this embodiment, the other processing manner under the first preset rule is to shield the audio data: during the call, the voice channel is closed directly and the user no longer receives audio data. In this case, after the audio data is shielded, the system can jump to a text-data or image-data communication state and continue the call normally. When the original communication state was a video call, the video data being the superposition of audio data and image data, the system keeps transmitting the image data after shielding the audio data; the user may then choose whether to restore the audio transmission or to switch to text data for the call.
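A hedged sketch of the two audio options under the first preset rule follows, assuming 16-bit mono PCM samples held in a NumPy array. The naive resampling pitch shift is only a stand-in for a production voice changer, which would use a phase vocoder or similar so that duration is preserved:

```python
# Minimal voice-change sketch (assumption: mono PCM samples in a NumPy
# array). change_voice() shifts pitch by naive resampling, which also
# changes duration; mask_audio() implements the "shield the audio" option.
import numpy as np


def change_voice(samples: np.ndarray, pitch_factor: float = 1.3) -> np.ndarray:
    # Resample: index spacing > 1 raises pitch, < 1 lowers it.
    indices = np.arange(0, len(samples), pitch_factor)
    return np.interp(indices, np.arange(len(samples)), samples).astype(samples.dtype)


def mask_audio(samples: np.ndarray) -> np.ndarray:
    # Shield the audio data entirely (the "close the voice channel" option).
    return np.zeros_like(samples)
```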
Referring to fig. 4 to 6, fig. 4 is a third embodiment of the data processing method of the present invention, and based on the first or second embodiment, step S12 further includes:
Step S122: if the user data includes a user image, processing the user image according to a second preset rule, where the second preset rule includes at least one of the following:
shielding the user image;
identifying the characteristics of a specific area of the user image, and processing the area according to the characteristics, wherein:
if the specific area is a face area, identifying the expression characteristics of the face area of the user image, and processing the face area according to the expression characteristics; and/or,
if the specific area is at least one of an eye area, a fingerprint area, a privacy part and a preset fixed area, performing security and privacy processing, wherein the security and privacy processing comprises at least one of deleting, hiding, blurring, mosaicking, and replacing the specific area with a preset image.
In this embodiment, if the user data received by the terminal device or the intermediate server includes a user image, the user image is processed according to the second preset rule. One processing manner under the second preset rule is to shield the user image: during the call, the image channel is closed directly and the user no longer receives user image data. In this case, after the image data is shielded, the system can jump to a text-data or audio-data communication state and continue the call normally. When the original communication state was a video call, the video data being the superposition of audio data and image data, the system keeps transmitting the audio data after shielding the image data; the user may then choose whether to restore the image transmission or to switch to text data in place of the audio data for the call.
In this embodiment, the other processing manner under the second preset rule is to identify the characteristics of a specific area of the user image and process that area according to the characteristics. After receiving the user image, the terminal device or the intermediate server processes only the face area of the image to achieve the effect of anonymous video chat, using face recognition technology to identify the expression characteristics of the face area and extract the dynamic change parameters of the expression.
In this embodiment, after the terminal device or the intermediate server extracts the expression dynamic change parameters of the dynamic image, the expression selected in the expression library is adjusted according to the extracted expression dynamic change parameters, and a dynamic expression that can reflect facial expression changes of a human face is synthesized.
In the embodiment, the AR expression is adopted to replace the face image, and the AR expression can form the corresponding dynamic expression according to the dynamic change of the facial expression of the face, so that the change process of the facial expression of the caller is reflected, and a good animation effect is presented.
In the embodiment, the facial expression change process of the person is presented more truly by capturing the dynamic image expression and fusing the dynamic image expression with the selected expression in the expression library.
In this embodiment, referring to fig. 5, fig. 5 shows a terminal display interface when the user data includes a user image. In one-to-one call mode, the display screen has two video windows, showing the home-terminal user image and the opposite-terminal user image respectively; the video windows can be resized or swapped. Clicking video window 2 exchanges its position with video window 1, and dragging the frame of video window 2 zooms it. For convenience of description, the display screen in the figure is taken as the operation display interface of the first terminal, video window 1 displays the image of the first terminal, and video window 2 displays the image of the second terminal. The display screen provides expression buttons for the user to select; selecting one of them processes the image in the current video window. When home-terminal processing is used, i.e., the first terminal collects and processes the user image, the collected real image of the user may be displayed in video window 1, and the user can select any expression at the corresponding position on the display screen to process the image in video window 1. The image in video window 2 is the received image already processed by the second terminal; the first-terminal user cannot process it and can only zoom video window 2 or swap it with video window 1.
When opposite-terminal processing is used, i.e., the first terminal processes the image collected by the second terminal, the display screen of the first terminal must not show the real information of the image collected by the second terminal, in order to preserve anonymity. Therefore, referring to fig. 6, before entering the video call interface, a dialog box pops up on the screen of the first terminal, instructing the user to choose either the default effect or another expression to process the received image collected by the second terminal. After the user makes a selection, the system processes the image according to the chosen instruction and enters the communication interface; at this point, the first-terminal user can only process the image in video window 2 and cannot process the image in video window 1.
When intermediate-server processing is used, the display screen of the first terminal likewise must not show the real information of the image collected by the second terminal, in order to preserve anonymity. Before entering the video call interface, a dialog box pops up on the screen of the first terminal, instructing the user to choose either the default effect or another expression to process the received image collected by the second terminal. After the user makes a selection, the system processes the image according to the chosen instruction and enters the communication interface. At this point, the user can process the image in video window 1 as well as the image in video window 2: video window 1 of the current display interface serves as the main window, and an expression button can be selected directly to process its image; to process the image in video window 2, the user clicks video window 2 to swap its position with video window 1, and then selects any expression button.
In this embodiment, the user can select different expressions from the expression library at any time to replace the currently selected expression. The expressions stored in the expression library can also be downloaded and updated through a network or other modes.
In this embodiment, besides processing the acquired real image with an expression, the user may blur it in mosaic form or cover it with a picture, thereby anonymizing the real image.
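The mosaic branch of the second preset rule might look roughly like the following sketch, using OpenCV's stock Haar cascade as an assumed stand-in for whatever face detector the terminal or server actually uses:

```python
# Sketch of face-area mosaicking on a single BGR video frame. The Haar
# cascade is an illustrative choice; production code would likely use a
# stronger detector and track faces across frames.
import cv2


def mosaic_faces(frame, block: int = 12):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        roi = frame[y:y + h, x:x + w]
        # Downscale then upscale with nearest-neighbour to pixelate the face.
        small = cv2.resize(roi, (max(1, w // block), max(1, h // block)))
        frame[y:y + h, x:x + w] = cv2.resize(
            small, (w, h), interpolation=cv2.INTER_NEAREST)
    return frame
```

The same loop could instead blur the region, replace it with a preset image, or hand the extracted expression parameters to an AR-expression renderer, matching the other branches of the security and privacy processing described above.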
Referring to fig. 7, fig. 7 is a fourth embodiment of the data processing method according to the present invention, and based on any one of the first to third embodiments, the data processing method further includes the following steps:
Step S14: displaying or hiding the user information; and/or,
displaying or hiding a viewing portal for the user information.
In this embodiment, after the user enters the communication state, if the terminal device is used to process the user data, then to achieve the effect of anonymous chat, the terminal device hides the user information of the other party when it is acquired, so that the user cannot view the other party's real information. Alternatively, the viewing portal for the opposite user's information is hidden and the user-information viewing interface is locked; that is, the terminal user cannot click open the opposite user's information bar and therefore cannot view the real information of the opposite user.
In this embodiment, after the terminal hides the acquired user information of the other party, or hides the user-information viewing portal of the logged-in user, the hidden user information can be restored if an authorized-disclosure instruction from the other party's user is obtained, so that the terminal user can view the other user's information.
In this embodiment, after the user enters the communication state, if the intermediate server is used to process the user data, then to achieve the effect of anonymous chat, the intermediate server hides or encrypts the user information after obtaining it from the terminal, or displays the user information only at the intermediate server, so that the user cannot view the other party's real information, thereby ensuring the anonymity of the communication.
In this embodiment, after the intermediate server hides the user information obtained from the first terminal and the second terminal, the hidden user information can be restored if an authorized-disclosure instruction from the terminal user is obtained, so that the terminal user can view the opposite user's information.
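A minimal sketch of the display-or-hide logic for user information follows; the "***" placeholder and the authorization flag are illustrative assumptions rather than anything specified by the patent:

```python
# Sketch of hiding user information until an authorized-disclosure
# instruction is received (fourth embodiment).
def render_user_info(user_info: dict, peer_authorized: bool) -> dict:
    if peer_authorized:
        # An authorized-disclosure instruction restores the hidden fields.
        return user_info
    # Otherwise mask every field so the real identity cannot be viewed.
    return {key: "***" for key in user_info}
```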
Referring to fig. 8, fig. 8 is a fifth embodiment of the data processing method of the present invention, which includes the steps of:
Step S21: acquiring target data, wherein the target data comprises user data, terminal information of the terminal, and/or scene information of the terminal;
In this embodiment, the target data includes user data, terminal information of the terminal device (i.e., information reflecting the current state of the terminal device), and/or scene information of the terminal device. The user data includes at least one of user information, image data, audio data, video data, and text data, where the video data is a superposition of audio data and image data; the audio data and/or image data are the original data collected by the terminal device and reflect the user's real voice and the facial image information in the real scene.
In this embodiment, the user data is derived from at least one of the first terminal, the second terminal, and the server.
In this embodiment, for convenience of distinguishing, the terminal devices in the video call state are divided into a first terminal and a second terminal. The first terminal and the second terminal are the terminal devices that establish the current call connection; either may be the call-initiating terminal or the called terminal. When the first terminal is the call-initiating terminal, the second terminal is the called terminal. While the terminal establishes the current call connection, the server may serve as a transmission carrier for the user data; that is, the server may receive user data sent by one terminal and forward it to any terminal device.
In this embodiment, when the current call is in a group chat state, the first terminal or the second terminal may receive user data sent by multiple terminals, and the user data sent by the first terminal or the second terminal may likewise be received by multiple other terminals, either directly or through the intermediate server.
Step S22, processing the target data;
In this embodiment, if the first terminal acquires the target data, the target data may be sent to the second terminal through the server; in this process, both the server and the second terminal can receive the target data. Therefore, the user data can be processed at the data acquisition end, namely the first terminal, or at the intermediate server or the second terminal.
In this embodiment, after the terminal device or the server receives the user data, it may process the audio data with the system's built-in audio processing software and the image data with the system's built-in image processing software, hiding, altering, or replacing the real audio data and/or image data so that the processed data no longer reflects the user's real voice, the real scene information, or the real facial image information.
In this embodiment, when the user data received by the terminal device or the server is text data, the text data may be processed by translation, font switching, and the like.
Step S23: outputting the processed target data.
In this embodiment, when the first terminal directly processes the user data acquired at the home terminal, it may send the processed user data to the second terminal or display it locally; after receiving the processed user data sent by the first terminal, the second terminal displays it, thereby achieving the effect of anonymous video chat.
In this embodiment, when the second terminal processes the user data acquired through the first terminal, it may display the processed user data on the current call interface or send it back to the first terminal; after receiving the processed user data sent by the second terminal, the first terminal displays it, likewise achieving the effect of anonymous video chat.
In this embodiment, when the intermediate server processes the user data acquired through the first terminal and/or the second terminal, it may send the processed user data to the first terminal and/or the second terminal for display. The number of terminal devices is not limited.
In this embodiment, the terminal device may send the acquired user data to the intermediate server, which performs the configured processing on it; this reduces the terminal's computational load for processing audio data, image data, and the like, and frees up runtime memory. The server then sends the processed user data back to the terminal device, which directly displays the processed user image, thereby achieving the effect of anonymous video chat.
Referring to fig. 9, fig. 9 is a sixth embodiment of the data processing method of the present invention, and based on the fifth embodiment, step S22 includes:
Step S221: judging whether the terminal information of the terminal and/or the scene information of the terminal meets a preset rule;
In this embodiment, the terminal information includes: at least one of battery level, security mode, and whether an early-warning condition is met; the scene information includes: at least one of public transport, high-speed movement, noise information, wake-up information, and brightness information.
In this embodiment, after the terminal information and/or the scene information are acquired, it is judged whether they meet a preset rule. The preset rule comprises at least one of the following: the battery level of the terminal meets a preset battery level condition; the security of the terminal meets a preset security mode condition; the terminal meets a preset early-warning condition; the scene meets a public place condition, the public place including at least one of a subway, a bus, a train, a square, and a shopping mall; the moving speed meets a preset speed condition; the noise information meets a preset noise condition; the wake-up information meets a preset wake-up condition; the brightness meets a preset brightness condition.
Step S222, if the preset rule is satisfied, processing the user data according to a preset policy.
In this embodiment, if the terminal information and/or the scene information satisfy the preset rule, indicating that the current terminal device meets the condition for entering the communication state, the user data is processed according to the preset policy.
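A hedged sketch of the rule check in steps S221 and S222 follows; every threshold and field name below is an illustrative assumption, since the patent only requires that some preset battery, security, early-warning, place, speed, noise, wake-up, or brightness condition be met:

```python
# Sketch of evaluating the preset rule against terminal and scene
# information. Thresholds and dict keys are hypothetical.
def meets_preset_rule(terminal_info: dict, scene_info: dict) -> bool:
    return any((
        terminal_info.get("battery_level", 100) <= 20,   # battery condition
        terminal_info.get("security_mode", False),       # security mode active
        terminal_info.get("early_warning", False),       # early-warning condition
        scene_info.get("place") in {"subway", "bus", "train", "square", "mall"},
        scene_info.get("speed_kmh", 0.0) >= 60.0,        # high-speed movement
        scene_info.get("noise_db", 0.0) >= 70.0,         # noisy environment
    ))


def process_if_needed(user_data, terminal_info, scene_info, process):
    # Step S222: process the user data only when a preset rule is met.
    if meets_preset_rule(terminal_info, scene_info):
        return process(user_data)
    return user_data
```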
Referring to fig. 10, fig. 10 is a seventh embodiment of the data processing method of the present invention, and based on the sixth embodiment, step S222 includes:
Step S2221: if the user data includes audio data, processing the audio data according to a first preset rule, where the first preset rule includes at least one of the following:
recognizing sound characteristics of the audio data, performing voice-change processing on the audio data according to the sound characteristics, and outputting the voice-changed audio data;
shielding the audio data.
In this embodiment, if the user data received by the terminal device or the intermediate server is audio data, the audio data is processed according to the first preset rule. One processing manner under the first preset rule is to identify the sound characteristics in the audio data, extracting features such as frequency and amplitude, which correspond to properties of the voice such as speaking speed and intonation, and then to perform voice-change processing on the audio data according to these characteristics. There are various specific processing manners: the user may select the sound effect corresponding to the currently selected expression to process the received voice information, or select a sound effect type from the voice library. To use the sound effect corresponding to the currently selected expression, the user can simply trigger the sound effect button on the current call interface, and the corresponding sound effect is output. After opening the sound effect button, the user can also choose among different sound effect types: clicking the floating sound-effect window on the current call interface pops up the sound effect types stored in the voice library, which can be browsed by sliding; clicking the icon of a sound effect type converts the voice to the corresponding effect. The user can switch the sound type at any time by clicking the corresponding icon.
In this embodiment, the voice library stores the sound types available to the user. These may be sound types corresponding to expressions stored in the expression library, or sound types not associated with any expression; the sound types stored in the voice library can also be downloaded and updated through a network or by other means.
In this embodiment, after the terminal extracts the variation parameters of voice characteristics such as frequency and amplitude, the sound type selected from the voice library is adjusted according to these parameters; that is, the extracted feature parameters are imported into the selected sound type for data synthesis, so that the selected sound type reflects the changes in the speaker's speed and intonation.
In this embodiment, by extracting characteristic parameters such as the frequency and amplitude of the real voice and fusing them with the sound type selected from the voice library, the character's voice-change process is presented more realistically.
In this embodiment, when receiving audio data, the terminal device or the intermediate server may also, according to the user's selection, leave the received voice information unprocessed; in this case, the effect of the anonymous call is not affected.
In this embodiment, the other processing manner under the first preset rule is to shield the audio data.
Referring to fig. 11, fig. 11 is an eighth embodiment of the data processing method of the present invention, and based on the sixth or seventh embodiment, step S222 further includes:
Step S2222: if the user data includes a user image, processing the user image according to a second preset rule, where the second preset rule includes at least one of the following:
shielding the user image;
identifying the characteristics of a specific area of the user image, and processing the area according to the characteristics, wherein:
if the specific area is a face area, identifying the expression characteristics of the face area of the user image, and processing the face area according to the expression characteristics; and/or,
if the specific area is at least one of an eye area, a fingerprint area, a privacy part and a preset fixed area, performing security and privacy processing, wherein the security and privacy processing comprises at least one of deleting, hiding, blurring, mosaicking, and replacing the specific area with a preset image.
In this embodiment, if the user data received by the terminal device or the intermediate server includes a user image, the user image is processed according to the second preset rule. One processing manner under the second preset rule is to shield the user image.
In this embodiment, the other processing manner under the second preset rule is to identify the characteristics of a specific area of the user image and process that area according to the characteristics. After receiving the user image, the terminal device or the intermediate server processes only the face area of the image to achieve the effect of anonymous video chat, using face recognition technology to identify the expression characteristics of the face area and extract the dynamic change parameters of the expression.
In this embodiment, after the terminal device or the intermediate server extracts the expression dynamic change parameters of the dynamic image, the expression selected in the expression library is adjusted according to the extracted expression dynamic change parameters, and a dynamic expression that can reflect facial expression changes of a human face is synthesized.
In the embodiment, the AR expression is adopted to replace the face image, and the AR expression can form the corresponding dynamic expression according to the dynamic change of the facial expression of the face, so that the change process of the facial expression of the caller is reflected, and a good animation effect is presented.
In the embodiment, the facial expression change process of the person is presented more truly by capturing the dynamic image expression and fusing the dynamic image expression with the selected expression in the expression library.
In this embodiment, the user can select different expressions from the expression library at any time to replace the currently selected expression. The expressions stored in the expression library can also be downloaded and updated through a network or other modes.
In this embodiment, besides processing the acquired real image with an expression, the user may blur it in mosaic form or cover it with a picture, thereby anonymizing the real image.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or by hardware alone, but in many cases the former is the better implementation. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, a magnetic disk, or an optical disk) and including instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention and is not intended to limit the scope of the present invention. All equivalent modifications made using the contents of this specification and the accompanying drawings, or applied directly or indirectly in other related technical fields, fall within the scope of the present invention.

Claims (13)

1. A data processing method, characterized by comprising the steps of:
S11: acquiring user data;
S12: processing the user data;
S13: outputting the processed user data.
2. The method of claim 1, wherein step S12 includes:
if the user data comprises audio data, processing the audio data according to a first preset rule, wherein the first preset rule comprises at least one of the following:
recognizing sound characteristics of the audio data, performing voice-changing processing on the audio data according to the sound characteristics, and outputting the audio data after the voice-changing processing;
shielding the audio data.
3. The method of claim 1, wherein step S12 includes:
if the user data comprises a user image, processing the user image according to a second preset rule, wherein the second preset rule comprises at least one of the following:
shielding the user image;
identifying the characteristics of a specific area of the user image, and processing the specific area according to the characteristics, wherein:
if the specific area is a face area, identifying the expression characteristics of the face area of the user image, and processing the face area according to the expression characteristics; and/or,
if the specific area is at least one of an eye area, a fingerprint area, a private part, and a preset fixed area, performing security and privacy processing, wherein the security and privacy processing comprises at least one of deleting, hiding, blurring, mosaicking, and replacing the specific area with a preset image.
4. The method according to any one of claims 1 to 3, wherein the user data comprises at least one of user information, image data, audio data, video data, and text data; and/or,
the user data is derived from at least one of a first terminal, a second terminal, and a server.
5. The method of claim 4, further comprising:
displaying or hiding the user information; and/or,
displaying or hiding a viewing portal for the user information.
6. A data processing method, applied to at least one terminal, characterized by comprising the steps of:
S21: acquiring target data, wherein the target data comprises user data, terminal information of the terminal, and/or scene information of the terminal;
S22: processing the target data;
S23: outputting the processed target data.
7. The method according to claim 6, wherein step S22 includes:
judging whether the terminal information of the terminal and/or the scene information of the terminal meets a preset rule;
and if so, processing the user data according to a preset policy.
8. The method of claim 7, wherein the terminal information comprises at least one of battery level information, a security mode, and satisfaction of an early-warning condition;
the preset rule comprises at least one of the following:
the battery level of the terminal meets a preset battery level condition;
the security status of the terminal meets a preset security mode condition;
the terminal meets a preset early-warning condition.
9. The method of claim 7, wherein the scene information comprises at least one of public transport information, high-speed movement, noise information, wake-up information, and brightness information;
the preset rule comprises at least one of the following:
the scene meets a public place condition, the public place comprising at least one of a subway, a bus, a train, a square, and a market;
the moving speed meets a preset speed condition;
the noise information meets a preset noise condition;
the wake-up information meets a preset wake-up condition;
the brightness meets a preset brightness condition.
10. The method according to any one of claims 7 to 9, wherein the processing the user data according to a preset policy comprises:
if the user data comprises audio data, processing the audio data according to a first preset rule, wherein the first preset rule comprises at least one of the following:
recognizing sound characteristics of the audio data, performing voice-changing processing on the audio data according to the sound characteristics, and outputting the audio data after the voice-changing processing;
shielding the audio data.
11. The method according to any one of claims 7 to 9, wherein the processing the user data according to a preset policy comprises:
if the user data comprises a user image, processing the user image according to a second preset rule, wherein the second preset rule comprises at least one of the following:
shielding the user image;
identifying the characteristics of a specific area of the user image, and processing the specific area according to the characteristics, wherein:
if the specific area is a face area, identifying the expression characteristics of the face area of the user image, and processing the face area according to the expression characteristics; and/or,
if the specific area is at least one of an eye area, a fingerprint area, a private part, and a preset fixed area, performing security and privacy processing, wherein the security and privacy processing comprises at least one of deleting, hiding, blurring, mosaicking, and replacing the specific area with a preset image.
12. A terminal device, characterized in that the terminal device comprises a memory, a processor, and a control program of a data processing method stored on the memory and executable on the processor, wherein the control program, when executed by the processor, implements the steps of the data processing method according to any one of claims 1 to 5 or 6 to 11.
13. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a control program of a data processing method, and the control program, when executed by a processor, implements the steps of the data processing method according to any one of claims 1 to 5 or 6 to 11.
CN201911035248.0A 2019-10-28 2019-10-28 Data processing method, terminal device and computer readable storage medium Active CN110784676B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911035248.0A CN110784676B (en) 2019-10-28 2019-10-28 Data processing method, terminal device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911035248.0A CN110784676B (en) 2019-10-28 2019-10-28 Data processing method, terminal device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110784676A true CN110784676A (en) 2020-02-11
CN110784676B CN110784676B (en) 2023-10-03

Family

ID=69387207

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911035248.0A Active CN110784676B (en) 2019-10-28 2019-10-28 Data processing method, terminal device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110784676B (en)

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000175168A (en) * 1998-12-01 2000-06-23 Matsushita Electric Ind Co Ltd Substitute image communication system and its method
CN1532775A (en) * 2003-03-19 2004-09-29 Matsushita Electric Industrial Co., Ltd. Videophone terminal
CN101697580A (en) * 2009-10-30 2010-04-21 ZTE Corporation Method and device for realizing a videophone
CN103198508A (en) * 2013-04-07 2013-07-10 Hebei University of Technology Human face expression animation generation method
CN104378577A (en) * 2013-08-16 2015-02-25 Lenovo (Beijing) Co., Ltd. Information processing method and electronic equipment
CN104935496A (en) * 2014-03-19 2015-09-23 Tencent Technology (Shenzhen) Co., Ltd. Instant messaging method, system, device and instant messaging terminal
CN104333730A (en) * 2014-11-26 2015-02-04 Beijing QIYI Century Science and Technology Co., Ltd. Video communication method and video communication device
CN105654537A (en) * 2015-12-30 2016-06-08 Institute of Automation, Chinese Academy of Sciences Expression cloning method and device capable of real-time interaction with a virtual character
CN107333086A (en) * 2016-04-29 2017-11-07 Zhangying Information Technology (Shanghai) Co., Ltd. Method and device for video communication in a virtual scene
CN106055996A (en) * 2016-05-18 2016-10-26 Vivo Mobile Communication Co., Ltd. Method and mobile terminal for multimedia information sharing
CN106028114A (en) * 2016-05-19 2016-10-12 Zhejiang Dahua Technology Co., Ltd. Witness protection method and device for collecting audio/video evidence in real time
CN106851171A (en) * 2017-02-21 2017-06-13 Fujian Jiangxia University Privacy protection system and method for video calls
CN107154069A (en) * 2017-05-11 2017-09-12 Shanghai Weiman Network Technology Co., Ltd. Data processing method and system based on a virtual character
CN107911644A (en) * 2017-12-04 2018-04-13 Lv Qingxiang Method and device for video calls based on simulated facial expressions
CN110278140A (en) * 2018-03-14 2019-09-24 Alibaba Group Holding Ltd. Communication method and device
CN109120985A (en) * 2018-10-11 2019-01-01 Guangzhou Huya Information Technology Co., Ltd. Image display method, apparatus and storage medium in live streaming
CN109639999A (en) * 2018-12-28 2019-04-16 Nubia Technology Co., Ltd. Optimization method for video call data, mobile terminal, and readable storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023208090A1 (en) * 2022-04-28 2023-11-02 Neufast Limited Method and system for personal identifiable information removal and data processing of human multimedia
WO2024021922A1 (en) * 2022-07-26 2024-02-01 中兴通讯股份有限公司 Video call method, electronic device, and storage medium

Also Published As

Publication number Publication date
CN110784676B (en) 2023-10-03

Similar Documents

Publication Title
CN112218103B (en) Live broadcast room interaction method and device, electronic equipment and storage medium
CN103916536B User interface method and system in a mobile terminal
CN108876877A (en) Emoticon image
CN113099298B (en) Method and device for changing virtual image and terminal equipment
US20190222806A1 (en) Communication system and method
CN110827378A (en) Virtual image generation method, device, terminal and storage medium
US20140139619A1 (en) Communication method and device for video simulation image
US20090002479A1 (en) Methods and terminals that control avatars during videoconferencing and other communications
CN106254311A Live broadcasting method and device, and live data stream display method and device
US20090157223A1 (en) Robot chatting system and method
CN104335575A (en) Method, server, and terminal for conducting a video conference
JP2016537922A (en) Pseudo video call method and terminal
US20210241465A1 (en) Expression transfer across telecommunications networks
CN109614902A (en) Face image processing process, device, electronic equipment and computer storage medium
CN107948708A Barrage display method and device
WO2021023047A1 (en) Facial image processing method and device, terminal, and storage medium
CN113163253B (en) Live broadcast interaction method and device, electronic equipment and readable storage medium
CN110784676B (en) Data processing method, terminal device and computer readable storage medium
US20200236301A1 (en) Systems and methods for providing personalized videos featuring multiple persons
WO2020150690A2 (en) Systems and methods for providing personalized videos
CN107786427B (en) Information interaction method, terminal and computer readable storage medium
CN105120366A Presentation method for a local image magnification function in video calls
CN108040280A (en) Content item display methods and device, storage medium
KR20110099414A (en) Apparatus and method for providing animation effect in portable terminal
US11348264B1 (en) Capturing content on writing surfaces using depth and image sensing

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant