CN112218034A - Video processing method, system, terminal and storage medium - Google Patents


Info

Publication number
CN112218034A
CN112218034A
Authority
CN
China
Prior art keywords: target, image, terminal, key point, point information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011090106.7A
Other languages
Chinese (zh)
Inventor
徐铭鑫 (Xu Mingxin)
李辉 (Li Hui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN202011090106.7A
Publication of CN112218034A
Priority to PCT/CN2021/113971 (published as WO2022078066A1)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/14 Systems for two-way working
    • H04N 7/15 Conference systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/40 Support for services or applications
    • H04L 65/403 Arrangements for multi-party communication, e.g. for conferences
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/60 Network streaming of media packets
    • H04L 65/75 Media network packet handling
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/80 Responding to QoS

Abstract

The present disclosure relates to the field of computer technologies, and in particular, to a video processing method, system, terminal, and storage medium. The video processing method provided by the present disclosure includes: acquiring a first target image; sending the first target image to a second terminal; acquiring a second target image; determining second target key point information according to the second target image; sending second target key point information to a second terminal, so that the second terminal processes the first target image based on the second target key point information to obtain a second target simulation image; the first target image, the second target key point information and the second target simulation image all correspond to the same target.

Description

Video processing method, system, terminal and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a video processing method, system, terminal, and storage medium.
Background
Video conference systems generally compress video according to the H.264 or H.265 video coding standard and then transmit it via RTMP (Real-Time Messaging Protocol) or RTSP (Real Time Streaming Protocol), so the transmitted data volume is large and the requirements for network bandwidth and stability are high. When the network is unstable or bandwidth is low, problems such as high network latency, a blurry display, and even device disconnection arise.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
According to one or more embodiments of the present disclosure, there is provided a video processing method including:
acquiring a first target image;
sending the first target image to a second terminal;
acquiring a second target image;
determining second target key point information according to the second target image;
sending the second target key point information to the second terminal, so that the second terminal processes the first target image based on the second target key point information to obtain a second target simulation image;
the first target image, the second target key point information and the second target simulation image all correspond to the same target.
According to one or more embodiments of the present disclosure, there is provided a video processing method including:
acquiring a first target image;
receiving second target key point information;
processing the first target image based on the second target key point information to obtain a second target simulation image;
displaying the second target simulation image;
and the second target key point information, the first target image and the second target simulation image correspond to the same target.
According to one or more embodiments of the present disclosure, there is provided a first terminal including:
a first target image acquisition unit configured to acquire a first target image;
the first target image sending unit is used for sending the first target image to a second terminal;
a second target image acquisition unit for acquiring a second target image;
the key point information determining unit is used for determining second target key point information according to the second target image;
a key point information sending unit, configured to send the second target key point information to the second terminal, so that the second terminal processes the first target image based on the second target key point information to obtain a second target simulation image;
the first target image, the second target key point information and the second target simulation image all correspond to the same target.
According to one or more embodiments of the present disclosure, there is provided a second terminal including:
a first image acquisition unit for acquiring a first target image;
a key point information receiving unit, configured to receive second target key point information;
the image processing unit is used for processing the first target image based on the second target key point information to obtain a second target simulation image;
a display unit for displaying the second target simulation image;
and the first target image, the second target key point information and the second target simulation image correspond to the same target.
In accordance with one or more embodiments of the present disclosure, there is provided a system comprising:
a first terminal provided in accordance with one or more embodiments of the present disclosure; and
a second terminal provided in accordance with one or more embodiments of the present disclosure.
According to one or more embodiments of the present disclosure, there is provided a terminal including:
at least one memory and at least one processor;
wherein the memory is configured to store program code, and the processor is configured to call the program code stored in the memory to perform a video processing method provided according to one or more embodiments of the present disclosure.
According to one or more embodiments of the present disclosure, there is provided a non-transitory computer storage medium storing program code executable by a computer device to cause the computer device to perform a video processing method provided according to one or more embodiments of the present disclosure.
In this way, according to the video processing method provided by the embodiments of the present disclosure, the first target image and the second target key point information are sent to the second terminal, so that the second terminal processes the first target image based on the second target key point information to obtain a second target simulation image similar to the second target image. This realizes real-time image transmission with an extremely low data volume, enabling video calls even under poor network conditions.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a flowchart of a video processing method according to an embodiment of the present disclosure;
fig. 2 is a flowchart of a video processing method according to another embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a system provided in accordance with an embodiment of the present disclosure;
fig. 4 is a signal flow diagram of a system provided in accordance with an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a first terminal according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a second terminal according to another embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a terminal device for implementing an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". The term "responsive to" and related terms mean that one signal or event is affected to some extent, but not necessarily completely or directly, by another signal or event. If an event x occurs "in response" to an event y, x may respond directly or indirectly to y. For example, the occurrence of y may ultimately result in the occurrence of x, but other intermediate events and/or conditions may exist. In other cases, y may not necessarily result in the occurrence of x, and x may occur even though y has not already occurred. Furthermore, the term "responsive to" may also mean "at least partially responsive to". The term "determining" broadly encompasses a wide variety of actions that can include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like, and can also include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like, as well as resolving, selecting, choosing, establishing and the like. Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art will understand them to mean "one or more" unless the context clearly indicates otherwise.
For the purposes of this disclosure, the phrase "a and/or B" means (a), (B), or (a and B).
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The video processing method provided according to one or more embodiments of the present disclosure is applied to a first terminal including, but not limited to, mobile terminals and fixed terminals such as a digital TV, a desktop computer, a notebook computer, a PAD (tablet), a smart watch, a mobile phone, a digital broadcast receiver, a PDA (personal digital assistant), a PMP (portable multimedia player), and the like, which may transmit image data in a wired or wireless manner.
Referring to fig. 1, fig. 1 shows a flowchart of a video processing method 100 provided by an embodiment of the present disclosure, which includes steps S101 to S105:
step S101: a first target image is acquired.
The first target image is an image containing a target. In some embodiments, the target image is a face image.
In some embodiments, the first target image is captured by an image capture device of the first terminal. For example, the first terminal starts a front camera to acquire a current target image.
In some embodiments, the first target image may be stored in the first terminal in advance, or stored in the cloud server, and sent to the first terminal by the server. Illustratively, the first target image may be a frontal face image of the user or a head portrait of the user.
Step S102: and sending the first target image to a second terminal.
In this step, the first terminal may send the acquired first target image to the second terminal directly or via at least one intermediate server in a wired or wireless manner.
Step S103: a second target image is acquired.
In this step, the second target image may be captured by an image capture device of the first terminal.
Step S104: and determining second target key point information according to the second target image.
For example, the second target key point information may be determined from the second target image using a model such as an ASM (Active Shape Model), an AAM (Active Appearance Model), CPR (Cascaded Pose Regression), or a deep convolutional neural network.
Step S105: and sending the second target key point information to the second terminal, so that the second terminal processes the first target image based on the second target key point information to obtain a second target simulation image.
The first target image, the second target key point information and the second target simulation image all correspond to the same target.
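To make concrete how small the per-frame payload of step S105 can be, the following sketch packs one frame's key point information into a fixed-size binary message. This is an illustration under assumed parameters, not part of the disclosed method: the 68-point landmark convention and float32 encoding are assumptions.

```python
import struct

# Assumed 68-point facial-landmark convention; the disclosure fixes no count.
NUM_KEYPOINTS = 68

def pack_keypoints(points):
    """Serialize (x, y) key points as little-endian float32 pairs."""
    if len(points) != NUM_KEYPOINTS:
        raise ValueError("expected %d keypoints" % NUM_KEYPOINTS)
    flat = [coord for point in points for coord in point]
    return struct.pack("<%df" % (2 * NUM_KEYPOINTS), *flat)

def unpack_keypoints(payload):
    """Inverse of pack_keypoints: bytes back to a list of (x, y) tuples."""
    flat = struct.unpack("<%df" % (2 * NUM_KEYPOINTS), payload)
    return list(zip(flat[0::2], flat[1::2]))

points = [(float(i), float(i) + 0.5) for i in range(NUM_KEYPOINTS)]
payload = pack_keypoints(points)
# 68 points x 2 coords x 4 bytes = 544 bytes per frame
```

At 30 frames per second such a payload amounts to roughly 16 KB/s, orders of magnitude below a compressed video stream, which is what makes the low-bandwidth path viable.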
In order to further clarify the technical solution of the embodiments of the present disclosure, a video call is taken as an application scenario. According to one or more embodiments of the present disclosure, when user A initiates a video call to user B using a first terminal (e.g., a mobile phone), the phone may capture a face image of user A (i.e., the first target image) through its front camera and send it to user B's terminal (i.e., the second terminal); alternatively, it may send an avatar used by user A to the second terminal as the first target image. On this basis, during the subsequent video call, the first terminal does not need to send user A's video data containing the real-time images (the second target images) to the second terminal. Instead, it can send user A's facial feature point information (i.e., the second target key point information), generated from each second target image, to the second terminal. By processing the first target image according to the second target key point information, the second terminal can obtain a simulated image similar to the second target image and present that simulated image to user B in place of the second target image. In other words, after the first terminal sends the initial face image to the second terminal, the visual effect of user A's "video image" can be presented on the second terminal by sending only the facial feature point information generated from the face images captured in real time, rather than the captured face images themselves.
In this way, according to the video processing method provided by the embodiment of the disclosure, the first target image and the second target key point information are sent to the second terminal, so that the second terminal processes the first target image based on the second target key point information to obtain a second target simulation image similar to the second target image, thereby realizing real-time image transmission with extremely low data volume, and further performing video conversation under the condition of poor network environment.
In some embodiments, the method 100 further comprises: acquiring video data through an image capture device; in this case, step S103 comprises extracting the second target image from the video data. Illustratively, the second target image may be an image frame of the video.
In some embodiments, in step S101, the first target image is extracted from the video data. Illustratively, the first target image may be an image frame of the video.
In some embodiments, step S101 comprises: and when the first terminal establishes a video connection with the second terminal, capturing the first target image through the image capturing device.
The image capturing device can be arranged in the first terminal or externally connected with the first terminal.
In some embodiments, step S101 comprises: capturing the first target image by the image capture device when the first terminal initiates a video connection to the second terminal. Generally, when a user initiates a video connection using a terminal, the terminal has already turned on its camera to capture the user's face image. Therefore, by taking the image captured by the image capture device at the moment the first terminal initiates the video connection as the first target image, this embodiment requires no additional acquisition step.
In some embodiments, the method 100 further comprises:
step A1: determining a network connection state of the first terminal and/or the second terminal;
step A2: and if the network connection state meets a preset condition, sending the second target key point information.
For example, the preset condition may be that the network speed of the first terminal and/or the second terminal is lower than a preset threshold. In this embodiment, by determining the current network connection state of the first terminal and/or the second terminal, real-time video processing can be implemented by sending the second target key point information, with its extremely low data volume, when the network connection state is poor.
It should be noted that, in this embodiment, step A1 may be executed before any one of steps S101 to S105. It will be appreciated that different execution orders of step A1 will result in step A2 comprising different sub-steps. For example, but not limited to, step A1 may be executed before step S101; when the network connection state is determined to satisfy the preset condition, steps S102 to S105 are executed, that is, step A2 comprises steps S102 to S105. Step A1 may also be executed after step S104 and before step S105; when the network connection state is determined to satisfy the preset condition, step S105 is executed, that is, step A2 is step S105.
In some embodiments, the method 100 further comprises:
step B1: determining a network connection state of the first terminal and/or the second terminal;
step B2: and if the network connection state does not meet the preset condition, sending image data to the second terminal.
The image data is generated from images captured by an image capture device. Illustratively, the image data may be video data compressed according to the H.264 or H.265 video coding standard.
It should be noted that, in this embodiment, step B1 may be executed before or after any one of steps S101 to S105. For example, but not limited to, step B1 may be executed before step S101; when the network connection state is determined not to satisfy the preset condition, step B2 is executed and execution of steps S101 to S105 is stopped. Step B1 may also be executed after step S104 and before step S105; when the network connection state is determined not to satisfy the preset condition, step B2 is executed instead of step S105.
Thus, according to one or more embodiments of the present disclosure, the first terminal may select to send the image information or the key point information according to the network connection state of the first terminal and/or the second terminal, so that the transmission data amount may be adjusted according to the network connection state to adapt to the current network connection state.
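The selection between the key point path and the ordinary video path can be sketched as follows. The threshold value and the bandwidth probe are assumptions; the disclosure leaves the preset condition open.

```python
# Hypothetical threshold; the disclosure does not fix a value.
LOW_BANDWIDTH_KBPS = 500

def choose_transmission_mode(bandwidth_kbps):
    """Steps A1/A2 and B1/B2 in miniature: when the measured network speed
    falls below the preset threshold, send key point information; otherwise
    fall back to ordinary compressed video."""
    if bandwidth_kbps < LOW_BANDWIDTH_KBPS:
        return "keypoints"  # low-bandwidth path (steps S102-S105)
    return "video"          # normal path (step B2)
```

In a real system the bandwidth estimate would come from a network probe or from transport-layer feedback; the decision itself can be re-evaluated on every frame, matching the observation that step A1/B1 may run before or after any of steps S101 to S105.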
The video processing method provided according to one or more embodiments of the present disclosure is applied to a second terminal including, but not limited to, a mobile terminal and a fixed terminal such as a digital TV, a desktop computer, a notebook computer, a PAD (tablet), a smart watch, a mobile phone, a digital broadcast receiver, a PDA (personal digital assistant), a PMP (portable multimedia player), etc., which may transmit image data in a wired or wireless manner.
Referring to fig. 2, fig. 2 shows a flowchart of a video processing method 200 provided by an embodiment of the present disclosure, which includes steps S201 to S204:
step S201: a first target image is acquired.
The first target image is an image containing a target. In some embodiments, the target image is a face image.
In some embodiments, the second terminal receives the first target image transmitted by the first terminal.
In some embodiments, the first target image may be stored in the second terminal in advance, or stored in the cloud server and sent to the second terminal by the server.
Step S202: second target keypoint information is received.
Step S203: and processing the first target image based on the second target key point information to obtain a second target simulation image.
Illustratively, the first target image may be processed using an MLS (Moving Least Squares) deformation algorithm, or a gradient-based mesh deformation algorithm driven by facial motion parameters.
In some embodiments, the key point displacement information may be determined based on second target key point information and key point information corresponding to the first target image, and the first target image may be processed based on the key point displacement information to obtain a second target simulation image.
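As a minimal sketch of the displacement-based processing described above, the following uses simple inverse-distance weighting plus nearest-neighbor sampling as a stand-in for MLS; the actual deformation algorithm is left open by the disclosure.

```python
import numpy as np

def warp_by_keypoint_displacement(image, ref_kps, new_kps, eps=1e-6):
    """Warp `image` so pixels near each reference key point follow that
    point's displacement (new_kps - ref_kps).

    Inverse-distance weighting with nearest-neighbor sampling stands in
    for the MLS deformation mentioned in the text."""
    h, w = image.shape[:2]
    disp = np.asarray(new_kps, float) - np.asarray(ref_kps, float)   # (K, 2)
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs, ys], axis=-1).astype(float)               # (h, w, 2) as (x, y)

    # Weight of every pixel with respect to every reference key point
    diff = coords[:, :, None, :] - np.asarray(ref_kps, float)[None, None]
    weights = 1.0 / ((diff ** 2).sum(-1) + eps)                      # (h, w, K)
    weights /= weights.sum(-1, keepdims=True)

    # Backward mapping: sample the source at (target - local displacement)
    offset = (weights[..., None] * disp[None, None]).sum(axis=2)     # (h, w, 2)
    src = coords - offset
    sx = np.clip(np.round(src[..., 0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(src[..., 1]).astype(int), 0, h - 1)
    return image[sy, sx]
```

When the incoming key points coincide with the reference key points, the displacement is zero and the image is returned unchanged, which is a useful sanity check for any deformation backend.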
Step S204: and displaying the second target simulation image.
In this way, according to the video processing method provided by the embodiments of the present disclosure, after acquiring the first target image, the second terminal only needs to receive real-time target key point information with an extremely low data volume and process the first target image according to that information to obtain a simulated target image similar to the real-time target image. A real-time image of the target can thus be presented using very little network bandwidth, achieving a display effect close to a video session even under poor network conditions.
In connection with the above video processing methods, fig. 3 shows a schematic diagram of a system provided according to an embodiment of the present disclosure. The first terminal 310 and the second terminal 320 may communicate over a network directly or via at least one intermediate server.
Referring to fig. 4, fig. 4 shows a signal flow diagram of a system provided according to an embodiment of the present disclosure.
Step S411: the first terminal 310 initiates a video connection to the second terminal 320.
Step S412: the first terminal 310 captures a first target image through an image capturing device.
Step S413: the first terminal 310 determines whether the network connection status of the first terminal meets a preset condition.
In step S413, if the network connection status of the first terminal meets a preset condition, steps S414 to S417 are executed.
Step S414: the first terminal 310 sends the first target image to the second terminal 320; accordingly, the second terminal 320 receives the first target image in step S511.
Step S415: the first terminal 310 captures a second target image through an image capturing device.
Step S416: the first terminal 310 determines second target keypoint information from the second target image.
Step S417: the first terminal 310 sends the second target keypoint information to the second terminal 320, and returns to perform the steps S415-S417 in a loop.
Accordingly, in step S512, the second terminal 320 receives the second target key point information; next, the second terminal 320 performs steps S513 to S514.
Step S513: the second terminal 320 processes the first target image based on the second target key point information to obtain a second target simulation image.
Step S514: the second terminal 320 displays a second target simulation image.
In step S413, if the network connection status of the first terminal does not satisfy the preset condition, step S420 is executed.
Step S420: the first terminal 310 transmits video data to the second terminal 320. The video data is generated from successive images captured by the image capture device of the first terminal 310.
Accordingly, the second terminal 320 receives the video data in step S520.
Step S521: the second terminal 320 displays a video according to the received video data.
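The signal flow of fig. 4's low-bandwidth branch can be condensed into a toy simulation. Class and payload names are illustrative, not the patented implementation.

```python
class FirstTerminal:
    """Sender side: ship the reference image once (steps S412/S414), then
    only key point packets (steps S415-S417). Illustrative sketch only."""
    def __init__(self, link):
        self.link = link
        self.reference_sent = False

    def send_frame(self, image, keypoints):
        if not self.reference_sent:
            self.link.append(("image", image))
            self.reference_sent = True
        self.link.append(("keypoints", keypoints))

class SecondTerminal:
    """Receiver side: store the reference (step S511), then pair it with
    each key point packet (steps S512-S514; the warp itself is elided)."""
    def __init__(self):
        self.reference = None
        self.displayed = []

    def receive(self, kind, payload):
        if kind == "image":
            self.reference = payload
        else:
            self.displayed.append((self.reference, payload))

link = []
sender, receiver = FirstTerminal(link), SecondTerminal()
for frame_no in range(3):
    sender.send_frame("first-target-image", [("kp", frame_no)])
for kind, payload in link:
    receiver.receive(kind, payload)
```

Only one full image crosses the link per session; every subsequent frame costs just one key point packet.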
Accordingly, as shown in fig. 5, the present disclosure provides a first terminal 600, including:
a first target image acquisition unit 610 for acquiring a first target image;
a first target image transmitting unit 620, configured to transmit the first target image to a second terminal;
a second target image acquiring unit 630 for acquiring a second target image;
a key point information determining unit 640, configured to determine second target key point information according to the second target image;
a key point information sending unit 650, configured to send the second target key point information to the second terminal, so that the second terminal processes the first target image based on the second target key point information to obtain a second target simulation image;
the first target image, the second target key point information and the second target simulation image all correspond to the same target.
For the embodiments of the apparatus, since they correspond substantially to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described apparatus embodiments are merely illustrative, in that modules illustrated as separate modules may or may not be separate. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
In this way, according to the first terminal provided by the embodiments of the present disclosure, the first target image and the second target key point information are sent to the second terminal, so that the second terminal processes the first target image based on the second target key point information to obtain a second target simulation image close to the second target image. Real-time video processing can thus be realized with an extremely low data volume, enabling video calls even in a poor network environment.
In some embodiments, the first terminal 600 further comprises: a video capturing unit for capturing video data; the second target image obtaining unit 630 is configured to extract the second target image from the video data. Illustratively, the second target image may be an image frame of the video.
In some embodiments, the first target image obtaining unit 610 is configured to extract the first target image from the video data. Illustratively, the first target image may be an image frame of the video.
In some embodiments, the first target image obtaining unit 610 is configured to capture the first target image by the image capturing device when the first terminal establishes a video connection with the second terminal.
The image capturing device can be arranged in the first terminal or externally connected with the first terminal.
In some embodiments, the first target image obtaining unit 610 is configured to capture the first target image through the image capture device when the first terminal initiates a video connection to the second terminal. Generally, when a user initiates a video connection using a terminal, the terminal has already turned on its camera to capture the user's face image. Therefore, by taking the image captured by the image capture device at the moment the first terminal initiates the video connection as the first target image, this embodiment requires no additional acquisition step.
In some embodiments, the first terminal 600 further comprises: a network state determining unit, configured to determine a network connection state of the first terminal and/or the second terminal; the key point information sending unit 650 is configured to send the second target key point information if the network connection state meets a preset condition.
For example, the preset condition may be that the network speed of the first terminal and/or the second terminal is lower than a preset threshold. In this embodiment, by determining the current network connection state of the first terminal and/or the second terminal, real-time video processing can be implemented by sending the second target key point information, with its extremely low data volume, when the network connection state is poor.
In some embodiments, the first terminal 600 further comprises: an image data sending unit, configured to send image data to the second terminal if the network connection state does not meet the preset condition.
Wherein the image data is generated from an image captured by the image capturing device. Illustratively, the image data may be video data compressed according to the H.264 or H.265 video coding standard.
Thus, according to one or more embodiments of the present disclosure, the first terminal may select to send the image information or the key point information according to the network connection state of the first terminal and/or the second terminal, so that the transmission data amount may be adjusted according to the network connection state to adapt to the current network connection state.
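The selection logic described above can be sketched as follows. This is an illustrative sketch only, not the patent's implementation: the threshold value, the helper names, and the payload format are all assumptions.

```python
# Hypothetical sketch: choose between sending compressed image data and
# key point information based on a measured network speed.
from dataclasses import dataclass
from typing import List, Tuple

SPEED_THRESHOLD_KBPS = 500  # assumed preset condition

@dataclass
class Payload:
    kind: str   # "keypoints" or "image"
    data: bytes

def serialize_keypoints(keypoints: List[Tuple[float, float]]) -> bytes:
    # A key point set is tiny compared with a compressed video frame.
    return b";".join(b"%.1f,%.1f" % (x, y) for x, y in keypoints)

def select_payload(network_speed_kbps: float,
                   keypoints: List[Tuple[float, float]],
                   compressed_frame: bytes) -> Payload:
    # Preset condition met (poor connection): send only key point information.
    if network_speed_kbps < SPEED_THRESHOLD_KBPS:
        return Payload("keypoints", serialize_keypoints(keypoints))
    # Otherwise send the ordinary compressed image data.
    return Payload("image", compressed_frame)

kps = [(120.0, 80.0), (180.0, 80.0), (150.0, 140.0)]
frame = b"\x00" * 20_000  # stand-in for an H.264/H.265 frame
slow = select_payload(200.0, kps, frame)    # poor network -> key points
fast = select_payload(2_000.0, kps, frame)  # good network -> image data
```

The key point payload here is a few dozen bytes, whereas even a compressed video frame is typically tens of kilobytes, which is the data-volume gap the embodiment exploits.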
Accordingly, as shown in fig. 6, the present disclosure provides a second terminal 700, including:
a first image acquisition unit 710 for acquiring a first target image;
a key point information receiving unit 720, configured to receive second target key point information;
the image processing unit 730 is configured to process the first target image based on the second target key point information to obtain a second target simulation image; and
a display unit 740, configured to display the second target simulation image;
wherein the second target key point information, the first target image and the second target simulation image correspond to the same target.
For the apparatus embodiments, since they substantially correspond to the method embodiments, reference may be made to the relevant description of the method embodiments. The above-described apparatus embodiments are merely illustrative: modules described as separate modules may or may not be physically separate. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
In this way, according to the second terminal provided by the embodiments of the present disclosure, after the first target image is acquired, only real-time target key point information with an extremely low data volume needs to be received. The first target image can then be processed according to the target key point information to obtain a simulated target image similar to the real-time target image. A real-time image of the target can thus be presented using extremely low network bandwidth, achieving the display effect of a simulated video session even under a relatively poor network environment.
In some embodiments, the image processing unit 730 is configured to determine key point displacement information based on the second target key point information and the key point information corresponding to the first target image, and process the first target image based on the key point displacement information to obtain a second target simulation image.
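The displacement step can be sketched as below. This is a minimal sketch with assumed names, not the patent's implementation: the actual image synthesis (for example, by a warping or generative model) is outside its scope, so the displacements are simply applied to the reference key points.

```python
# Hypothetical sketch of the key point displacement embodiment: compare
# newly received key points with the key points of the stored first target
# image, then use the per-point displacement to drive the simulation.
from typing import Dict, Tuple

Point = Tuple[float, float]

def keypoint_displacement(reference: Dict[str, Point],
                          received: Dict[str, Point]) -> Dict[str, Point]:
    # Displacement of each named key point relative to the first target image.
    return {name: (received[name][0] - reference[name][0],
                   received[name][1] - reference[name][1])
            for name in reference}

def apply_displacement(reference: Dict[str, Point],
                       displacement: Dict[str, Point]) -> Dict[str, Point]:
    # Move the reference key points by the computed displacements; an image
    # model would use these moved points to synthesize the simulation image.
    return {name: (reference[name][0] + displacement[name][0],
                   reference[name][1] + displacement[name][1])
            for name in reference}

ref = {"left_eye": (120.0, 80.0), "right_eye": (180.0, 80.0)}
new = {"left_eye": (122.0, 84.0), "right_eye": (182.0, 84.0)}
disp = keypoint_displacement(ref, new)
moved = apply_displacement(ref, disp)
```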
Accordingly, in accordance with one or more embodiments of the present disclosure, there is provided a system, characterized in that the system comprises:
a first terminal as provided in accordance with one or more embodiments of the present disclosure; and
a second terminal provided in accordance with one or more embodiments of the present disclosure.
Accordingly, in accordance with one or more embodiments of the present disclosure, there is provided an electronic device including:
at least one memory and at least one processor;
wherein the memory is configured to store program code, and the processor is configured to call the program code stored in the memory to perform the video processing method provided according to one or more embodiments of the present disclosure.
Accordingly, according to one or more embodiments of the present disclosure, there is provided a non-transitory computer storage medium storing program code executable by a computer device to cause the computer device to perform a video processing method provided according to one or more embodiments of the present disclosure.
Fig. 7 shows a schematic structural diagram of a terminal device 800 (e.g., the first terminal shown in fig. 3) for implementing an embodiment of the disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), and the like, and fixed terminals such as a digital TV, a desktop computer, and the like. The terminal device shown in fig. 7 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 7, the terminal device 800 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 801 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 802 or a program loaded from a storage means 808 into a Random Access Memory (RAM) 803. The RAM 803 also stores various programs and data necessary for the operation of the terminal device 800. The processing means 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Generally, the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 807 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage 808 including, for example, magnetic tape, hard disk, etc.; and a communication device 809. For example, the storage 808 may store a first database and a second database, wherein the first database stores at least one first sub-program identifier of a first program; the second database stores at least one second sub-program identification of the first program. The communication means 809 may allow the terminal apparatus 800 to perform wireless or wired communication with other apparatuses to exchange data. While fig. 7 illustrates a terminal apparatus 800 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method illustrated by the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication means 809, or installed from the storage means 808, or installed from the ROM 802. The computer program, when executed by the processing means 801, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be included in the terminal device; or may exist separately without being assembled into the terminal device.
The computer readable medium carries one or more programs which, when executed by the terminal device, cause the terminal device to: acquiring a first target image; sending the first target image to a second terminal; acquiring a second target image; determining second target key point information according to the second target image; sending the second target key point information to the second terminal, so that the second terminal processes the first target image based on the second target key point information to obtain a second target simulation image; the first target image, the second target key point information and the second target simulation image all correspond to the same target.
Alternatively, the computer readable medium carries one or more programs which, when executed by the terminal device, cause the terminal device to: acquiring a first target image; receiving second target key point information; processing the first target image based on the second target key point information to obtain a second target simulation image; displaying the second target simulation image; and the second target key point information, the first target image and the second target simulation image correspond to the same target.
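The sender and receiver program flows above can be sketched end to end as follows. This is a hedged illustration: the function names and the JSON wire format are assumptions, not the patent's actual message format.

```python
# Hypothetical sketch of the two program flows: the first terminal extracts
# and transmits second target key point information; the second terminal
# decodes it and uses it to update the stored first target image.
import json
from typing import Dict, Tuple

def sender_encode(keypoints: Dict[str, Tuple[float, float]]) -> str:
    # Second target key point information as a compact JSON message.
    return json.dumps({k: list(v) for k, v in keypoints.items()},
                      separators=(",", ":"))

def receiver_decode(message: str) -> Dict[str, Tuple[float, float]]:
    # Recover the named key points on the second terminal.
    return {k: (v[0], v[1]) for k, v in json.loads(message).items()}

kps = {"left_eye": (122.0, 84.0), "right_eye": (182.0, 84.0),
       "mouth": (152.0, 140.0)}
msg = sender_encode(kps)
decoded = receiver_decode(msg)
# The key point message is orders of magnitude smaller than a raw frame
# (e.g. a 640x480 RGB frame occupies 921,600 bytes uncompressed).
```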
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, apparatuses, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. In some cases, the name of a unit does not constitute a limitation of the unit itself; for example, an instruction receiving unit may also be described as "a unit for receiving a first operation instruction".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided a video processing method applied to a first terminal, including: acquiring a first target image; sending the first target image to a second terminal; acquiring a second target image; determining second target key point information according to the second target image; sending the second target key point information to the second terminal, so that the second terminal processes the first target image based on the second target key point information to obtain a second target simulation image; the first target image, the second target key point information and the second target simulation image all correspond to the same target.
According to one or more embodiments of the present disclosure, a video processing method is provided, which further includes: acquiring video data by an image capture device; the acquiring of the second target image comprises: a second target image frame is extracted from the video data.
According to one or more embodiments of the present disclosure, the acquiring a first target image includes: and when the first terminal establishes a video connection with the second terminal, capturing the first target image through the image capturing device.
According to one or more embodiments of the present disclosure, a video processing method is provided, which further includes: determining a network connection state of the first terminal and/or the second terminal; and if the network connection state meets a preset condition, sending the second target key point information.
According to one or more embodiments of the present disclosure, a video processing method is provided, which further includes: if the network connection state does not meet the preset condition, sending image data to the second terminal.
According to one or more embodiments of the present disclosure, there is provided a video processing method applied to a second terminal, including: acquiring a first target image; receiving second target key point information; processing the first target image based on the second target key point information to obtain a second target simulation image; displaying the second target simulation image; wherein the first target image, the second target key point information and the second target simulation image correspond to the same target.
According to one or more embodiments of the present disclosure, the processing the first target image based on the second target key point information to obtain a second target simulation image includes: determining key point displacement information based on the second target key point information and the key point information corresponding to the first target image; and processing the first target image based on the key point displacement information to obtain the second target simulation image.
According to one or more embodiments of the present disclosure, there is provided a first terminal including: a first target image acquisition unit configured to acquire a first target image; the first target image sending unit is used for sending the first target image to a second terminal; a second target image acquisition unit for acquiring a second target image; the key point information determining unit is used for determining second target key point information according to the second target image; a key point information sending unit, configured to send the second target key point information to the second terminal, so that the second terminal processes the first target image based on the second target key point information to obtain a second target simulation image; the first target image, the second target key point information and the second target simulation image all correspond to the same target.
According to one or more embodiments of the present disclosure, there is provided a second terminal including: a first image acquisition unit for acquiring a first target image; a key point information receiving unit, configured to receive second target key point information; the image processing unit is used for processing the first target image based on the second target key point information to obtain a second target simulation image; a display unit for displaying the second target simulation image; and the first target image, the second target key point information and the second target simulation image correspond to the same target.
According to one or more embodiments of the present disclosure, there is provided a video processing system including: a first terminal provided in accordance with one or more embodiments of the present disclosure; and a second terminal provided in accordance with one or more embodiments of the present disclosure.
According to one or more embodiments of the present disclosure, there is provided a terminal including: at least one memory and at least one processor; wherein the memory is configured to store program code, and the processor is configured to call the program code stored in the memory to perform a video processing method provided according to one or more embodiments of the present disclosure.
A non-transitory computer storage medium storing program code executable by a computer device to cause the computer device to perform a video processing method provided according to one or more embodiments of the present disclosure.
The foregoing description is merely a description of the preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combinations of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) features with similar functions disclosed in the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or logical acts of devices, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (12)

1. A video processing method applied to a first terminal is characterized by comprising the following steps:
acquiring a first target image;
sending the first target image to a second terminal;
acquiring a second target image;
determining second target key point information according to the second target image;
sending the second target key point information to the second terminal, so that the second terminal processes the first target image based on the second target key point information to obtain a second target simulation image;
the first target image, the second target key point information and the second target simulation image all correspond to the same target.
2. The video processing method of claim 1, further comprising:
acquiring video data by an image capture device;
the acquiring of the second target image comprises: a second target image is extracted from the video data.
3. The video processing method of claim 1, wherein said obtaining a first target image comprises:
and when the first terminal establishes a video connection with the second terminal, capturing the first target image through an image capturing device.
4. The video processing method of claim 1, further comprising:
determining a network connection state of the first terminal and/or the second terminal;
and if the network connection state meets a preset condition, sending the second target key point information.
5. The video processing method of claim 4, further comprising:
and if the network connection state does not meet the preset condition, sending image data to the second terminal.
6. A video processing method applied to a second terminal is characterized by comprising the following steps:
acquiring a first target image;
receiving second target key point information;
processing the first target image based on the second target key point information to obtain a second target simulation image;
displaying the second target simulation image;
and the first target image, the second target key point information and the second target simulation image correspond to the same target.
7. The video processing method of claim 6, wherein said processing the first target image based on the second target key point information to obtain a second target simulation image comprises:
determining key point displacement information based on the second target key point information and the key point information corresponding to the first target image;
and processing the first target image based on the key point displacement information to obtain a second target simulation image.
8. A first terminal, comprising:
a first target image acquisition unit configured to acquire a first target image;
the first target image sending unit is used for sending the first target image to a second terminal;
a second target image acquisition unit for acquiring a second target image;
the key point information determining unit is used for determining second target key point information according to the second target image;
a key point information sending unit, configured to send the second target key point information to the second terminal, so that the second terminal processes the first target image based on the second target key point information to obtain a second target simulation image;
the first target image, the second target key point information and the second target simulation image all correspond to the same target.
9. A second terminal, comprising:
a first image acquisition unit for acquiring a first target image;
a key point information receiving unit, configured to receive second target key point information;
the image processing unit is used for processing the first target image based on the second target key point information to obtain a second target simulation image;
a display unit for displaying the second target simulation image;
and the first target image, the second target key point information and the second target simulation image correspond to the same target.
10. A video processing system, comprising:
the first terminal of claim 8; and
the second terminal of claim 9.
11. A terminal, comprising:
at least one memory and at least one processor;
wherein the memory is configured to store program code and the processor is configured to call the program code stored in the memory to perform the method of any of claims 1 to 7.
12. A non-transitory computer storage medium storing program code executable by a computer device to cause the computer device to perform the method of any one of claims 1 to 7.
CN202011090106.7A 2020-10-13 2020-10-13 Video processing method, system, terminal and storage medium Pending CN112218034A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011090106.7A CN112218034A (en) 2020-10-13 2020-10-13 Video processing method, system, terminal and storage medium
PCT/CN2021/113971 WO2022078066A1 (en) 2020-10-13 2021-08-23 Video processing method and system, terminal, and storage medium


Publications (1)

Publication Number Publication Date
CN112218034A (en) 2021-01-12

Family

ID=74053773

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011090106.7A Pending CN112218034A (en) 2020-10-13 2020-10-13 Video processing method, system, terminal and storage medium

Country Status (2)

Country Link
CN (1) CN112218034A (en)
WO (1) WO2022078066A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022078066A1 (en) * 2020-10-13 2022-04-21 北京字节跳动网络技术有限公司 Video processing method and system, terminal, and storage medium

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
CN115225542A (en) * 2022-07-20 2022-10-21 北京京东乾石科技有限公司 Video information processing method and device, electronic equipment and storage medium
CN117041231A (en) * 2023-07-11 2023-11-10 启朔(深圳)科技有限公司 Video transmission method, system, storage medium and device for online conference

Citations (6)

Publication number Priority date Publication date Assignee Title
US20040218827A1 (en) * 2003-05-02 2004-11-04 Michael Cohen System and method for low bandwidth video streaming for face-to-face teleconferencing
CN102271241A (en) * 2011-09-02 2011-12-07 北京邮电大学 Image communication method and system based on facial expression/action recognition
CN103647922A (en) * 2013-12-20 2014-03-19 百度在线网络技术(北京)有限公司 Virtual video call method and terminals
CN108174141A (en) * 2017-11-30 2018-06-15 维沃移动通信有限公司 A kind of method of video communication and a kind of mobile device
CN108985241A (en) * 2018-07-23 2018-12-11 腾讯科技(深圳)有限公司 Image processing method, device, computer equipment and storage medium
CN110536095A (en) * 2019-08-30 2019-12-03 Oppo广东移动通信有限公司 Call method, device, terminal and storage medium

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
KR101460130B1 (en) * 2007-12-11 2014-11-10 삼성전자주식회사 A method of video communication in a portable terminal and an apparatus thereof
CN112218034A (en) * 2020-10-13 2021-01-12 北京字节跳动网络技术有限公司 Video processing method, system, terminal and storage medium



Also Published As

Publication number Publication date
WO2022078066A1 (en) 2022-04-21

Similar Documents

Publication Publication Date Title
CN112218034A (en) Video processing method, system, terminal and storage medium
CN113542902B (en) Video processing method and device, electronic equipment and storage medium
CN112291316B (en) Connection processing method and device, electronic equipment and computer readable storage medium
CN111459364B (en) Icon updating method and device and electronic equipment
CN112199174A (en) Message sending control method and device, electronic equipment and computer readable storage medium
CN111935442A (en) Information display method and device and electronic equipment
CN114095671A (en) Cloud conference live broadcast system, method, device, equipment and medium
CN113038176B (en) Video frame extraction method and device and electronic equipment
CN112269770B (en) Document sharing method, device and system and electronic equipment
CN110083768B (en) Information sharing method, device, equipment and medium
CN109189822B (en) Data processing method and device
CN113596328B (en) Camera calling method and device and electronic equipment
CN112203103B (en) Message processing method, device, electronic equipment and computer readable storage medium
CN113709573B (en) Method, device, equipment and storage medium for configuring video special effects
CN114187169A (en) Method, device and equipment for generating video special effect package and storage medium
CN115378878A (en) CDN scheduling method, device, equipment and storage medium
CN112346661A (en) Data processing method and device and electronic equipment
CN112162682A (en) Content display method and device, electronic equipment and computer readable storage medium
CN112040328A (en) Data interaction method and device and electronic equipment
CN114125485B (en) Image processing method, device, equipment and medium
CN110991312A (en) Method, apparatus, electronic device, and medium for generating detection information
CN112804457B (en) Photographing parameter determination method and device and electronic equipment
CN111105345B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111131305A (en) Video communication method and device and VR equipment
CN115268739A (en) Control method, control device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210112