CN114466218A - Live video character tracking method, device, equipment and storage medium - Google Patents

Live video character tracking method, device, equipment and storage medium

Info

Publication number
CN114466218A
CN114466218A
Authority
CN
China
Prior art keywords: human body, target person, video frame, target, key
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210150699.4A
Other languages
Chinese (zh)
Other versions
CN114466218B (en)
Inventor
宫凯程
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Cubesili Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Cubesili Information Technology Co Ltd filed Critical Guangzhou Cubesili Information Technology Co Ltd
Priority to CN202210150699.4A priority Critical patent/CN114466218B/en
Priority claimed from CN202210150699.4A external-priority patent/CN114466218B/en
Publication of CN114466218A publication Critical patent/CN114466218A/en
Application granted granted Critical
Publication of CN114466218B publication Critical patent/CN114466218B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Abstract

The present application relates to a live video character tracking method, apparatus, device, and storage medium, and belongs to the technical field of webcasting. The movement of a human body key part of a target person is determined from the tilt angle information of that part. When the tilt angle information is within a preset activity angle range, the display position of a first target person identifier in the next video frame is determined directly from the target person identification information of the first video frame, and the first target person identifier is displayed in the next video frame. Human body detection therefore does not need to be performed on every video frame of the live video, which improves the efficiency of tracking the target person.

Description

Live video character tracking method, device, equipment and storage medium
Technical Field
The present application relates to the field of live webcasting technologies, and in particular, to a live video character tracking method, apparatus, device, and storage medium.
Background
With the development of Internet technology, watching an anchor's live video in a live broadcast room has gradually become a daily entertainment activity. The live broadcast platform serves as the medium between the anchor and the audience: the anchor uploads live video data to the platform through equipment such as a camera, and the platform then delivers that data to the viewers' clients for playback.
During live broadcasting, the anchor can add a corresponding special effect (such as local deformation) to the character image displayed in the live picture to improve the viewing experience. However, adding such effects usually requires human body detection on every video frame, with the displayed character image then adjusted according to each frame's detection result, so character tracking efficiency is low.
Disclosure of Invention
Based on this, an object of the present application is to provide a live video character tracking method, apparatus, device, and storage medium that can improve live video character tracking efficiency.
According to a first aspect of the embodiments of the present application, a live video person tracking method is provided, including:
acquiring target person identification information and a first target person identifier of a first video frame, wherein the target person identification information is used for determining the position of a target person in the first video frame;
acquiring the next video frame adjacent to the first video frame;
acquiring position information of human body contour key points of a human body key part of the target person in the first video frame, wherein the human body key part is a part of the human body whose freedom of movement is higher than a preset threshold;
acquiring tilt angle information of the human body key part of the target person according to the position information of the human body contour key points of the human body key part; and
if the tilt angle information of the human body key part of the target person is within a preset activity angle range, determining the display position of the first target person identifier in the next video frame according to the target person identification information of the first video frame, and displaying the first target person identifier in the next video frame.
According to a second aspect of the embodiments of the present application, there is provided a live video person tracking apparatus, including:
an identification information acquisition module, configured to acquire target person identification information and a first target person identifier of a first video frame, wherein the target person identification information is used for determining the position of a target person in the first video frame;
a next video frame acquisition module, configured to acquire the next video frame adjacent to the first video frame;
a position information acquisition module, configured to acquire position information of human body contour key points of a human body key part of the target person in the first video frame, wherein the human body key part is a part of the human body whose freedom of movement is higher than a preset threshold;
an angle information acquisition module, configured to acquire tilt angle information of the human body key part of the target person according to the position information of the human body contour key points; and
an identification module, configured to, if the tilt angle information of the human body key part of the target person is within a preset activity angle range, determine the display position of the first target person identifier in the next video frame according to the target person identification information of the first video frame, and display the first target person identifier in the next video frame.
According to a third aspect of the embodiments of the present application, there is provided an electronic device including a processor and a memory, wherein the memory stores a computer program adapted to be loaded by the processor to perform any of the live video person tracking methods described above.
According to a fourth aspect of embodiments of the present application, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements any one of the live video person tracking methods.
In the present application, the movement of the human body key part of the target person is determined according to its tilt angle information. When that tilt angle information is within a preset activity angle range, the display position of the first target person identifier in the next video frame is determined directly from the target person identification information of the first video frame, and the first target person identifier is displayed at the corresponding position in the next video frame. Human body detection therefore does not need to be performed on every video frame of the live video, which improves the efficiency of tracking the target person.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
For a better understanding and practice, the present application is described in detail below with reference to the accompanying drawings.
Drawings
Fig. 1 is a schematic diagram of an application environment of a live video person tracking method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of live video person tracking provided by an embodiment of the present application;
fig. 3 is a flowchart of a live video person tracking method according to an embodiment of the present application;
FIG. 4 is an exemplary diagram of a body contour sampling point provided in one embodiment of the present application;
fig. 5 is a flowchart of a live video person tracking method according to another embodiment of the present application;
fig. 6 is a flowchart of a live video person tracking method according to another embodiment of the present application;
fig. 7 is a schematic diagram illustrating obtaining tilt angle information of key parts of a human body of a target person according to an embodiment of the present application;
FIG. 8 is a schematic diagram of live video person tracking provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of live video character tracking provided by an embodiment of the present application;
fig. 10 is a schematic structural diagram of a live video person tracking apparatus according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It should be understood that the embodiments described are only a few embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims.
In the description of the present application, it is to be understood that the terms "first," "second," "third," and the like are used solely to distinguish one element from another and do not describe a particular order or sequence, nor do they indicate or imply relative importance. The specific meaning of these terms in the present application can be understood by those of ordinary skill in the art as the case may be. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The word "if" as used herein may be interpreted as "when," "upon," or "in response to determining." Further, in the description of the present application, "a plurality" means two or more unless otherwise specified. "And/or" describes an association between associated objects and covers three cases: for example, "A and/or B" may mean that A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates an "or" relationship between the associated objects.
As will be appreciated by those skilled in the art, the terms "client," "terminal," and "terminal device" as used herein cover both devices having only a wireless signal receiver without transmit capability and devices with receiving and transmitting hardware capable of two-way communication over a two-way communication link. Such a device may include: a cellular or other communication device, such as a personal computer or tablet, with or without a single-line or multi-line display; a PCS (Personal Communications Service) device, which may combine voice, data processing, facsimile, and/or data communication capabilities; a PDA (Personal Digital Assistant), which may include a radio frequency receiver, a pager, Internet/intranet access, a web browser, a notepad, a calendar, and/or a GPS (Global Positioning System) receiver; or a conventional laptop and/or palmtop computer or other device that has and/or includes a radio frequency receiver. As used herein, a "client" or "terminal device" may be portable, transportable, installed in a vehicle (aeronautical, maritime, and/or land-based), or situated and/or configured to operate locally and/or in a distributed fashion at any other location on earth and/or in space. The "client" or "terminal device" used herein may also be a communication terminal, a web terminal, or a music/video playing terminal, such as a PDA, an MID (Mobile Internet Device), and/or a mobile phone with a music/video playing function, or a smart TV, set-top box, and the like.
The hardware referred to by the names "server", "client", "service node", etc. is essentially a computer device with the performance of a personal computer, and is a hardware device having necessary components disclosed by the von neumann principle, such as a central processing unit (including an arithmetic unit and a controller), a memory, an input device, an output device, etc., wherein a computer program is stored in the memory, and the central processing unit loads a program stored in an external memory into the internal memory to run, executes instructions in the program, and interacts with the input and output devices, thereby accomplishing specific functions.
It should be noted that the concept of "server" as referred to in this application can be extended to the case of a server cluster. According to the network deployment principle understood by those skilled in the art, the servers should be logically divided, and in physical space, the servers may be independent from each other but can be called through an interface, or may be integrated into one physical computer or a set of computer clusters. Those skilled in the art will appreciate this variation and should not be so limited as to restrict the implementation of the network deployment of the present application.
Referring to fig. 1, fig. 1 is a schematic view of the application scenario of a live video character tracking method according to an embodiment of the present application. The scenario includes an anchor terminal 20, a server terminal 10, and a viewer terminal 30; the anchor terminal 20 interacts with the viewer terminal 30 via the server terminal 10.
The anchor terminal 20 is the terminal that transmits the live video, typically the terminal used by the anchor in the webcast.
The viewer terminal 30 is the terminal that receives and displays the webcast video, typically the terminal used by a viewer watching the webcast.
The hardware underlying the anchor terminal 20 and the viewer terminal 30 is essentially a computer device; specifically, as shown in fig. 1, it may be a smart phone, a smart interactive tablet, a personal computer, or the like. Both the anchor terminal 20 and the viewer terminal 30 can access the Internet through well-known network access methods to establish a data communication link with the server terminal 10.
The server 10 is a service server, and may be responsible for further connecting with related audio data servers, video streaming servers, and other servers providing related support, etc., so as to form a logically associated server cluster for providing services to related terminal devices, such as the anchor terminal 20 and the viewer terminal 30 shown in fig. 1.
In the live broadcast room, interaction between the anchor and the audience can take place through known online interaction modes such as voice, video, and text. Generally, the anchor performs for the audience in the form of an audio-video stream, and economic transactions can also occur during the interaction. Of course, the live video character tracking method of the embodiments of the present application can also be applied to other related scenes, for example short videos and any other scene that requires tracking a target object in a video.
Specifically, a viewer watches a live broadcast as follows: the viewer opens a live application (e.g., YY) installed on the viewer terminal 30 and chooses to enter a live broadcast room, which triggers the viewer terminal 30 to load the live room interface. The interface includes a plurality of interactive components; by loading them, the viewer can watch the live broadcast and take part in various online interactions.
During live broadcasting, the anchor may add a corresponding special effect (for example, local deformation such as slimming) to a character displayed in the live picture to improve the viewing experience. However, to add a special effect to a target person in the live room interface, the person's position must be detected and tracked in real time, since the person may move at any moment in the live broadcast room. When tracking the target person, a person detection frame 401 as shown in fig. 2 can be used to frame the target person in the live picture and identify their position.
When the target person moves, their position often needs to be re-identified, and the person detection frame 401 re-framed around them to keep tracking.
Therefore, referring to fig. 3, an embodiment of the present application provides a live video character tracking method, including the following steps:
S101: acquiring target person identification information and a first target person identifier of a first video frame; the target person identification information is used for determining the position of the target person in the first video frame.
In general, after the anchor terminal starts broadcasting, the live video is captured by the anchor terminal's camera or by an external camera that has established a data connection with the anchor terminal, and the live video comprises multiple frames of live pictures.
The first video frame may be a frame of the live video whose picture contains the target person.
In one embodiment, the first video frame may be the live picture at the current point in time. In another embodiment, it may be the live picture in which the target person first appears: each live picture in the live video can be examined with a human body detection technique, the picture in which the target person first appears determined from the time order of the video frames in which the person is detected, and that picture taken as the first video frame.
The target person identification information is used for determining the position of the target person in the first video frame; when the first video frame belongs to a live video, the target person may be the anchor or a guest in the live broadcast room.
Specifically, the existing human body detection technology may be adopted to perform human body detection on the target person in the first video frame, so as to obtain the target person identification information of the first video frame.
The target person identification information may be area information of the area where the target person is located in the first video frame. For example, when the target person is detected using an existing human body bounding box detection algorithm, the target person identification information may include the size and position of the generated human body bounding box.
The first target person identifier may be an identifier, such as a graphic or an icon, preset by the user and used for locating the target person in the live broadcast room.
Alternatively, in one embodiment, the first target person identifier is a graphic or icon that is generated from the target person identification information of the first video frame and covers the area where the target person is located in the live broadcast room. For example, as shown in fig. 2, the first target person identifier may be the minimum bounding rectangle enclosing all parts of the target person (including the torso and the head).
In one embodiment, when the first video frame is a live picture containing the target person, a graphic or icon covering the area where the target person is located can be generated from the target person identification information of the first video frame, and the corresponding target person identifier displayed in the first video frame; for example, the target person identifier may be the minimum bounding rectangle enclosing all parts of the target person, including the torso and head.
The size of the target person identifier and its display position in the first video frame may be set according to the target person identification information of the first video frame and the user's requirements.
By displaying the corresponding target person identifier on each video frame of the live video that contains the target person, the identifier can be used to dynamically track the target person in the live video.
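As an illustration only, the identification information above can be kept as a simple record. A minimal sketch in Python, assuming the information reduces to a bounding box (the type and field names are illustrative, not part of the application):

```python
from dataclasses import dataclass

@dataclass
class TargetPersonId:
    """Hypothetical record for the target person identification information,
    assuming it reduces to the person's bounding box in the frame."""
    x: int  # top-left corner of the minimum bounding rectangle
    y: int
    w: int  # width of the rectangle enclosing torso and head
    h: int  # height of the rectangle
```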
S102: acquiring a next video frame adjacent to the first video frame;
S103: acquiring position information of human body contour key points of a human body key part of the target person in the first video frame.
Existing human body detection techniques are prone to errors at frequently moving parts of the body such as the wrists and arms; for example, detection is often inaccurate when an arm is extended horizontally, so the generated human body detection frame is truncated at the wrist or arm, which affects the accuracy of tracking the target person.
Therefore, to solve this problem, the embodiments of the present application obtain the position information of the human body key part of the target person and determine how the target person identifier is displayed in the next video frame according to the position information of the human body contour key points of that part, which improves tracking accuracy.
The human body contour key points are the human body contour sampling points of the human body key part; the sampling points can be obtained by manual annotation according to the parts of the human body contained in the live picture.
In a preferred embodiment, the body contour sampling points may be 64 sampling points around the body as shown in FIG. 4, covering the various parts of the body.
The human body key part is a part of the human body whose freedom of movement is higher than a preset threshold, and can be determined according to the daily activity of the human body.
The more frequently a part moves, the higher its corresponding freedom of movement; for example, the human body key parts may be frequently moving parts such as the wrists and arms.
Specifically, the human body key part can be a wrist or a forearm. As shown in fig. 4, when the human body key part is a forearm, the human body contour key points may include the human body contour sampling points [3, 4, 5, 6, 8, 9, 10, 11] of the left forearm and the human body contour sampling points [47, 48, 49, 50, 52, 53, 54, 55] of the right forearm.
In one embodiment, the position information of the human body contour key points can be obtained by identifying the contour key points of the key part of the target person in the first video frame with an existing contour point detection algorithm.
Preferably, in the embodiments of the present application, the area where the target person is located is first determined based on the target person identification information of the first video frame; a human contour point detection model is then applied to that area to obtain the position information of all human body contour sampling points, from which the position information of the human body contour key points is extracted. Determining the person's area first and only then acquiring the contour sampling points and contour key points effectively improves the positioning precision of the key points, and hence the accuracy of tracking the target person.
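A minimal sketch of this step, assuming the 64-point numbering of FIG. 4 and a placeholder `detect_contour_points` function standing in for any contour-point model (none of the names come from a real library):

```python
import numpy as np

# Forearm contour-point index sets, taken from the Fig. 4 numbering.
LEFT_FOREARM_IDXS = [3, 4, 5, 6, 8, 9, 10, 11]
RIGHT_FOREARM_IDXS = [47, 48, 49, 50, 52, 53, 54, 55]

def key_part_points(frame, bbox, detect_contour_points):
    """Crop the target person's area (from the S101 identification info),
    run contour detection on the crop, and map the key-part contour points
    back to full-frame coordinates."""
    x, y, w, h = bbox
    crop = frame[y:y + h, x:x + w]
    points = np.asarray(detect_contour_points(crop))  # shape (64, 2), crop coords
    points = points + np.array([x, y])                # back to frame coordinates
    return points[LEFT_FOREARM_IDXS], points[RIGHT_FOREARM_IDXS]
```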
S104: acquiring tilt angle information of the human body key part of the target person according to the position information of the human body contour key points of the human body key part.
The tilt angle of the human body key part can be the included angle between a line segment constructed from the human body contour key points and the horizontal line. There are at least two human body contour key points, and the line segment constructed from them can be used to determine the direction of motion of the human body key part.
To obtain the tilt angle information of the human body key part of the target person, a line representing the direction of motion of the key part is constructed according to the position information of the human body contour key points, and the included angle between this line and the horizontal line is computed based on trigonometry.
Specifically, the step of acquiring the tilt angle information of the human body key part of the target person includes:
constructing a first line representing the extension direction of the human body key part according to the position information of at least two human body contour key points; and
acquiring the included angle between the first line and the horizontal line.
In one embodiment, the first line representing the extension direction of the human body key part can be obtained by directly connecting human body contour key points.
If the first line is obtained this way, its first and second endpoints are the two connected contour key points. Specifically, the step of acquiring the included angle between the first line and the horizontal line includes acquiring the position information of the first endpoint and of the second endpoint of the first line, and computing:
θ0 = arctan(abs(y2 - y1) / abs(x2 - x1))
where θ0 is the included angle between the first line and the horizontal line, (x1, y1) is the position information of the first endpoint, and (x2, y2) is the position information of the second endpoint.
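A minimal sketch of this formula (the function name and the degree convention are illustrative assumptions):

```python
import math

def segment_angle(p1, p2):
    """Included angle, in degrees, between the segment p1-p2 and the
    horizontal line, per the formula above: arctan(|y2-y1| / |x2-x1|)."""
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2:  # vertical segment: the arctan argument would divide by zero
        return 90.0
    return math.degrees(math.atan(abs(y2 - y1) / abs(x2 - x1)))
```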
In one embodiment, when the human body contour key points are several sampling points on the key part, several lines can be constructed from them; the included angle between each line and the horizontal line is calculated separately, and the included angle between the first line and the horizontal line is obtained by summing or averaging those angles.
Specifically, in the embodiments of the present application, when there are at least three human body contour key points, the step of acquiring the tilt angle information of the human body key part of the target person includes:
constructing a plurality of branch lines according to the position information of each pair of adjacent human body contour key points; and
acquiring the included angle between each branch line and the horizontal line, and summing these angles to obtain the included angle between the first line and the horizontal line.
Each branch line can be the connecting line between two adjacent human body contour key points, and the included angle between a branch line and the horizontal line can be obtained with the formula above, which is not repeated here.
Because the human body contour key points in this scheme are sampling points on the contour of the human body key part, they generally include sampling points on both sides of the part (for example, the upper-side points [3, 4, 5, 6] and lower-side points [8, 9, 10, 11] of the left arm in fig. 4). The contour sampling points on the two sides may deviate somewhat with the run of the muscles, so constructing the first line from the sampling points of only one side can distort the judgment of the extension direction of the key part, while constructing lines from the sampling points on both sides and computing each line's angle with the horizontal requires a large amount of calculation.
Therefore, in a preferred embodiment, there are at least four human body contour key points arranged in opposite pairs; the first endpoint of the first line is the midpoint of one pair of oppositely arranged contour key points, and the second endpoint is the midpoint of another pair.
The step of constructing the first line representing the extension direction of the human body key part then specifically includes:
acquiring the position information of the midpoints of the oppositely arranged human body contour key points, and constructing the first line according to those midpoints.
Connecting the midpoints of the oppositely arranged contour key points on the two sides of the key part to obtain the first line avoids having the inconsistent run of the two sides' sampling points distort the judged extension direction, which effectively improves the accuracy of live video person tracking.
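A sketch of this midpoint construction, assuming the contour points are available as (x, y) pairs and using the Fig. 4 left-forearm indices as the default example:

```python
def first_line_from_pairs(points, pair_a=(3, 11), pair_b=(4, 10)):
    """Build the first line from the midpoints of two oppositely arranged
    pairs of contour key points; the default index pairs are illustrative.
    Returns the two endpoints of the first line."""
    def midpoint(i, j):
        (xi, yi), (xj, yj) = points[i], points[j]
        return ((xi + xj) / 2.0, (yi + yj) / 2.0)
    return midpoint(*pair_a), midpoint(*pair_b)
```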
S105: if the tilt angle information of the human body key part of the target person is within a preset activity angle range, determining the display position of the first target person identifier in the next video frame according to the target person identification information of the first video frame, and displaying the first target person identifier in the next video frame.
The activity angle range can be set according to the user's actual requirements; for example, it may be set as less than a preset included-angle threshold, such as 60°.
When the included angle between the first line and the horizontal line is less than or equal to the preset threshold, the tilt angle information of the human body key part of the target person is determined to be within the preset activity angle range, the display position of the first target person identifier in the next video frame is determined according to the target person identification information of the first video frame, and the identifier is displayed there; if the included angle is greater than the threshold, the movement of the target person is determined to exceed the preset activity angle range.
Determining the display position of the first target person identifier in the next video frame according to the target person identification information of the first video frame may mean determining, from that information, the position the identifier should enclose in the next video frame, and thereby its display position.
The human body key part of the target person in the embodiments of the present application is a body part with a high freedom of movement. The movement of the target person is inferred from the movement of this key part, which in turn decides whether the display position of the first target person identifier in the next video frame can be determined from the target person identification information of the first video frame; human body detection on the next video frame is then unnecessary, which reduces the frequency of human contour detection.
In an alternative embodiment, the execution subject of the live video person tracking method may be an anchor terminal, and in another alternative embodiment, the execution subject of the live video person tracking method may also be a server.
The movement of the human body key part of the target person is determined from its tilt angle information. When that information is within the preset activity angle range, the first target person identifier is displayed at the corresponding position in the next video frame directly according to the target person identification information of the first video frame; human body detection on every video frame of the live video is avoided, which improves the efficiency of tracking the target person.
For example, when the key part is the left arm, the extension angle of the left arm is determined from the position information of the contour key points on the arm, and this angle decides whether the arm's extension exceeds the range the original first target person identifier can cover. If the extended left arm is still within that range, the display position of the first target person identifier in the next video frame is determined directly from the target person identification information of the first video frame, and the identifier is displayed at the corresponding position in the next video frame.
When the tilt angle information of the human body key part of the target person exceeds the preset activity angle range, the target person identifier in the next video frame needs to be adjusted so that the displayed identifier can cover the position of the target person after the movement.
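The per-frame decision described above can be condensed as follows; `detect_person` and the other names are placeholders rather than an actual API:

```python
def track_next_frame(prev_id_info, tilt_in_activity_range, next_frame, detect_person):
    """Reuse the previous frame's target person identification info when the
    key part's tilt stays within the activity angle range (S105); otherwise
    fall back to re-detection (S201/S202) or frame stretching (S301/S302)."""
    if tilt_in_activity_range:
        # Display the first identifier at the position given by the first
        # video frame's identification information; no detection needed.
        return prev_id_info
    # Re-detect the person and display a second identifier at the new position.
    return detect_person(next_frame)
```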
The live video character tracking method can be integrated, together with a human body segmentation function, into a human-body special-effects product (for example, an electronic device configured with a live client, a video playing client, or a video image processing client for slimming, leg lengthening, and the like), or integrated with an action recognition function into an action recognition product (for example, a live platform can capture the anchor's actions and use them to add action special effects), improving the interest and user experience of watching live broadcasts.
As shown in fig. 5, in an optional embodiment, the live video person tracking method further includes the following steps:
S201: if the tilt angle information of the human body key part of the target person exceeds the preset activity angle range, acquiring the position information of the target person's position in the next video frame and a second target person identifier.
The position information of the target person in the next video frame can be obtained using an existing human body detection technique.
The second target person identifier may be an identifier, such as a graphic or an icon, preset by the user and used for locating the target person in the live broadcast room.
It should be noted that, in the embodiments of the present application, the second target person identifier is different from the first target person identifier.
The second target person identifier may be a different graphic or icon than the first; or it may be the same graphic or icon but with a different size.
S202: displaying the second target person identifier at the position of the target person in the next video frame according to the position information of that position.
In this embodiment, the position of the target person in the next video frame is obtained with a human body detection technique, and the preset second target person identifier is displayed at the corresponding position of the next video frame, thereby tracking the target person in the live video.
Existing human body detection techniques are prone to abnormal detections at key parts such as the wrists and arms; if the target person is identified in the live video with a human body detection frame taken directly from the detection result, truncation often occurs at these parts, which affects subsequent processing of the live video.
Therefore, in view of the above problem, as shown in fig. 6, in a preferred embodiment, the live video person tracking method further includes the following steps:
S301: if the tilt angle information of the human body key part of the target person exceeds the preset activity angle range, acquiring a second target person identifier;
S302: determining the display position of the second target person identifier in the next video frame according to the target person identification information of the first video frame, and displaying the second target person identifier in the next video frame.
In this embodiment, the display position of the second target person identifier is determined based on the target person identification information of the first video frame; when the first target person identifier is displayed in the first video frame and its display position is also determined from that information, the display position of the second target person identifier may be the same as that of the first.
That is, when the tilt angle information of the human body key part exceeds the preset activity angle range, the display position of the second target person identifier in the next video frame is determined according to the target person identification information of the first video frame, and a different, second target person identifier is displayed at the corresponding position in the next video frame.
The second target person identifier can cover the area of the target person in the live broadcast room; for example, it may be the minimum bounding rectangle enclosing all parts of the target person (including the torso and the head). When the target person moves substantially, a larger or smaller second target person identifier can be obtained to keep tracking the person in the live video.
Optionally, the step of acquiring the second target person identifier includes:
determining the moving direction information of the target person according to the tilt angle information of the human body key part of the target person; and
stretching the first target person identifier along the target person's moving direction according to a preset resizing rule and the moving direction information, to obtain the second target person identifier.
In this embodiment, the second target person identifier may be the same graphic or icon as the first target person identifier, but with a different size.
Further, both identifiers are human body detection frames, i.e., minimum bounding rectangles enclosing all parts of the target person (including the trunk and the head).
The preset resizing rule determines the stretching dimension of the first target person identifier; for example, when the identifier is a human body detection frame, the rule determines the outward expansion length of the frame.
For instance, when the human body key part is an arm, the outward expansion length can be set to a target multiple of the width of the human body detection frame; the target multiple can be set according to the actual application requirements and may be, for example, 0.1.
When the tilt angle information of the human body key part of the target person exceeds the preset activity angle range, the moving direction of the target person is determined from which key part exceeded the range; for example, when the tilt angle information of the left arm exceeds the preset activity angle range, the target person is determined to have extended the left arm horizontally.
Understandably, after the target person extends the left arm horizontally, the original first target person identifier can no longer cover the extended arm; the identifier therefore needs to be stretched in the arm's moving direction so that it covers the horizontally extended left arm.
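A sketch of this resizing rule under stated assumptions (the (x, y, w, h) box layout and the direction strings are illustrative):

```python
def stretch_detection_frame(bbox, direction, target_multiple=0.1):
    """Stretch an (x, y, w, h) human body detection frame along the moving
    direction; target_multiple=0.1 mirrors the example in the text."""
    x, y, w, h = bbox
    dx = int(target_multiple * w)  # outward expansion length
    if direction == "left":        # e.g. left arm extended horizontally
        return (x - dx, y, w + dx, h)
    if direction == "right":
        return (x, y, w + dx, h)
    return bbox  # directions not handled in this sketch leave the frame as is
```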
The following describes the live video character tracking method of the embodiments of the present application in detail, taking the arm as the human body key part.
As shown in fig. 7, the body contour key points of the left arm include the upper-side body contour sampling points [3, 4, 5, 6] and the lower-side body contour sampling points [8, 9, 10, 11].
Acquire the midpoint of each pair of oppositely arranged human body contour key points and construct lines representing the extension direction of the left arm from these midpoints: the midpoint of [3, 11] and the midpoint of [4, 10] give branch line L0; similarly, branch line L1 is constructed from the midpoints of [5, 9] and [4, 10], and branch line L2 from the midpoints of [5, 9] and [6, 8].
Based on trigonometry, the included angle between each of the branch lines L0-L2 and the horizontal line is calculated.
Taking branch line L0 as an example, the endpoints at its two ends are calculated from the oppositely arranged human body contour key points. Assume the endpoint coordinates are (x1, y1) and (x2, y2), where (x1, y1) is calculated from the position information of contour key points [3, 11] and (x2, y2) from contour key points [4, 10].
The included angle between branch line L0 and the horizontal line is calculated as follows:
θ0 = arctan(abs(y2 - y1) / abs(x2 - x1))
where θ0 is the included angle between branch line L0 and the horizontal line, (x1, y1) is the position information of the first endpoint of L0, and (x2, y2) is the position information of its second endpoint.
The included angle θ1 between branch line L1 and the horizontal line and the included angle θ2 between branch line L2 and the horizontal line can be obtained in the same way. The included angle between the first line and the horizontal line is then calculated as:
θ = θ0 + θ1 + θ2
where θ is the included angle between the first line and the horizontal line.
In the embodiments of the present application, with the activity angle range set as less than 60°: when θ ≤ 60°, it is determined that the target person's arm is not extended horizontally, and the human body detection frame is displayed in the next video frame directly according to the target person identification information of the first video frame; when θ > 60°, it is determined that the target person's left arm is extended horizontally, and the human body detection frame must be stretched so that it covers the horizontally extended left arm.
By the same principle, the midpoints of each pair of oppositely arranged contour key points [55, 54, 53, 52] on the upper side of the right arm and [47, 48, 49, 50] on the lower side in fig. 7 can be obtained, and branch lines R0, R1, and R2 representing the extension direction of the right arm constructed from these midpoints. The included angles between R0, R1, R2 and the horizontal line are solved to give the included angle between the right arm's first line and the horizontal line, and that angle is compared with the activity angle range to determine whether the right arm of the target person is extended horizontally.
The included angles of branch lines R0, R1, and R2 with the horizontal line, and of the right arm's first line with the horizontal line, are calculated in the same way as for the left arm above and are not repeated here.
The display of the human body detection frame in the video frame is then adjusted according to the horizontal extension of the target person's left and right arms.
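Putting the worked example together, a sketch that reuses the `segment_angle` helper from the earlier snippet; the left-arm pairs follow fig. 7, while the right-arm pairing is an assumption inferred from the index lists above:

```python
LEFT_ARM_PAIRS = [(3, 11), (4, 10), (5, 9), (6, 8)]         # from Fig. 7
RIGHT_ARM_PAIRS = [(55, 47), (54, 48), (53, 49), (52, 50)]  # assumed pairing

def arm_extended(points, pairs, threshold_deg=60.0):
    """Build branch lines L0-L2 (or R0-R2) from midpoints of oppositely
    arranged contour key points, sum their angles with the horizontal line,
    and report whether the summed angle exceeds the 60-degree threshold used
    in the worked example above."""
    mids = [((points[i][0] + points[j][0]) / 2.0,
             (points[i][1] + points[j][1]) / 2.0) for i, j in pairs]
    theta = sum(segment_angle(mids[k], mids[k + 1]) for k in range(len(mids) - 1))
    return theta > threshold_deg  # per the text: exceeding 60 means stretch the frame
```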
Specifically, fig. 8 is a schematic diagram of live video character tracking in one embodiment of the present application: picture (a) shows a first video frame containing the human body detection frame 401, and picture (b) shows the next video frame immediately following the first video frame.
In one embodiment, if the above steps determine that the target person's left arm is extended horizontally, the human body detection frame 402 is displayed in the next video frame.
The human body detection frame 402 is obtained by stretching the frame 401 to the left so that it covers the target person's horizontally extended left arm.
Fig. 9 is a display diagram of a live video person tracking method according to another embodiment of the present application: picture (c) shows a first video frame containing the human body detection frame 401, and picture (d) shows the next video frame immediately following the first video frame.
In one embodiment, if the above steps determine that both the left and right arms of the target person are extended horizontally, the human body detection frame 403 is displayed in the next video frame.
The human body detection frame 403 is obtained by stretching the frame 401 to both the left and right so that it covers the target person's horizontally extended arms.
It should be noted that the above embodiments are only exemplary illustrations, and should not limit the function and scope of the disclosure.
Those skilled in the art can, in combination with the above, detect the activity of other human body key parts (for example, the limbs) and adjust the display of the target person identifier in the next video frame accordingly to track the live video object.
In the embodiments of the present application, whether the target person identifier needs resizing is decided according to the extension of key parts such as the arms. When the target person moves substantially, the identifier's size is adjusted according to the person's movement between adjacent video frames, so the target person in the live video is tracked in time and the accuracy of human contour detection improves; when the person does not move substantially, they are identified in the next video frame directly according to the identification information of the previous video frame, so human body detection need not be performed on every frame and the tracking efficiency improves.
As shown in fig. 10, an embodiment of the present application further provides a live video person tracking apparatus, which may be implemented in software, hardware, or a combination of the two, as all or part of a computer device. The apparatus includes:
an identification information obtaining module 501, configured to obtain target person identification information and a first target person identifier of a first video frame; the target person identification information is used for determining the position of a target person in a first video frame;
a next video frame obtaining module 502, configured to obtain a next video frame immediately adjacent to the first video frame;
a position information obtaining module 503, configured to obtain position information of a human contour key point of a human key portion of a target person in the first video frame; wherein, the key parts of the human body are the parts of the human body with the freedom of movement higher than a preset threshold;
an angle information obtaining module 504, configured to obtain, according to the position information of the key points of the human body contour of the key parts of the human body, tilt angle information of the key parts of the human body of the target person;
and an identification module 505, configured to determine a display position of a first target person identifier in a next video frame according to the target person identification information of the first video frame if the tilt angle information of the human body key part of the target person is within a preset activity angle range, and display the first target person identifier in the next video frame.
It should be noted that when the live video person tracking apparatus provided in the above embodiment executes the live video person tracking method, the division into the above functional modules is only an example; in practice, these functions may be assigned to different functional modules as needed, i.e., the internal structure of the device may be divided differently to complete all or part of the functions described above. In addition, the apparatus and the method belong to the same concept; the detailed implementation process is given in the method embodiments and is not repeated here.
The embodiment provides an electronic device, which can be used for executing all or part of the steps of the live video character tracking method in the embodiment of the application. For details not disclosed in the present embodiment, please refer to the method embodiments of the present application.
Referring to fig. 11, fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. The electronic device 600 may be, but is not limited to, a combination of one or more of various servers, personal computers, notebook computers, smart phones, tablet computers, and the like.
In the preferred embodiment of the present application, the electronic device 600 includes a memory 601, at least one processor 602, at least one communication bus 603, and a transceiver 604.
Those skilled in the art will appreciate that the configuration of the electronic device shown in fig. 11 does not limit the embodiments of the present application; it may be a bus-type or a star-type configuration, and the electronic device 600 may include more or fewer hardware or software components than shown, or a different arrangement of components.
In some embodiments, the electronic device 600 is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit, a programmable gate array, a digital processor, an embedded device, and the like. The electronic device 600 may further include a client device, which includes, but is not limited to, any electronic product that can interact with a client through a keyboard, a mouse, a remote controller, a touch pad, or a voice control device, for example, a personal computer, a tablet computer, a smart phone, a digital camera, and the like.
It should be noted that the electronic device 600 is merely an example; other existing or future electronic products that can be adapted to the present application also fall within the scope of protection of the present application and are incorporated herein by reference.
In some embodiments, the memory 601 stores a computer program that, when executed by the at least one processor 602, implements all or part of the steps of the live video person tracking method described in the first embodiment. The memory 601 includes a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-Time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disk memory, magnetic disk memory, magnetic tape memory, or any other computer-readable medium that can be used to carry or store data.
In some embodiments, the at least one processor 602 is the control unit of the electronic device 600. It connects the various components of the electronic device 600 through various interfaces and lines, and executes the functions of the electronic device 600 and processes its data by running or executing the programs or modules stored in the memory 601 and calling the data stored therein. For example, when executing the computer program stored in the memory, the at least one processor 602 implements all or part of the steps of the live video person tracking method described in the embodiments of the present application, or implements all or part of the functions of the live video person tracking apparatus. The at least one processor 602 may be composed of one integrated circuit, for example a single packaged integrated circuit, or of a plurality of integrated circuits with the same or different functions packaged together, including one or more central processing units (CPUs), microprocessors, digital processing chips, graphics processors, and combinations of various control chips.
In some embodiments, the at least one communication bus 603 is arranged to enable communication between the memory 601, the at least one processor 602, and the other components.
The electronic device 600 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
This embodiment provides a computer-readable storage medium storing a computer program adapted to be loaded by a processor to execute the live video person tracking method described in the first embodiment of the present application; for the specific execution process, refer to the detailed description of the first embodiment, which is not repeated here.
Since the apparatus embodiments substantially correspond to the method embodiments, for relevant points refer to the description of the method embodiments. The apparatus embodiments described above are merely illustrative: the components described as separate parts may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present application. Those of ordinary skill in the art can understand and implement the solution without inventive effort.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above are merely examples of the present application and are not intended to limit it. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall be included in the scope of the claims of the present application.

Claims (10)

1. A live video person tracking method, the method comprising:
acquiring target person identification information and a first target person identifier of a first video frame; the target person identification information is used for determining the position of a target person in the first video frame;
acquiring a next video frame immediately adjacent to the first video frame;
acquiring position information of human body contour key points of a human body key part of the target person in the first video frame; wherein a human body key part is a part of the human body whose freedom of movement is higher than a preset threshold;
acquiring tilt angle information of the human body key part of the target person according to the position information of the human body contour key points of the human body key part;
and if the tilt angle information of the human body key part of the target person is within a preset activity angle range, determining the display position of the first target person identifier in the next video frame according to the target person identification information of the first video frame, and displaying the first target person identifier in the next video frame.
2. The live video person tracking method of claim 1, further comprising the steps of:
if the tilt angle information of the human body key part of the target person exceeds the preset activity angle range, acquiring position information of the target person in the next video frame and a second target person identifier;
and displaying the second target person identifier at the position of the target person in the next video frame according to the acquired position information.
3. The live video person tracking method of claim 1, further comprising the steps of:
if the tilt angle information of the human body key part of the target person exceeds the preset activity angle range, acquiring a second target person identifier; wherein the second target person identifier is different from the first target person identifier;
and determining the display position of the second target person identifier in the next video frame according to the target person identification information of the first video frame, and displaying the second target person identifier in the next video frame.
4. The live video person tracking method as claimed in claim 2 or 3, wherein the step of acquiring a second target person identifier comprises:
determining moving direction information of the target person according to the tilt angle information of the human body key part of the target person;
and stretching the first target person identifier along the moving direction of the target person according to a preset size adjustment rule and the moving direction information of the target person, to obtain the second target person identifier (an illustrative sketch of such a rule follows the claims).
5. The live video person tracking method according to claim 1, wherein there are at least two human body contour key points, and the step of acquiring the tilt angle information of the human body key part of the target person comprises:
constructing a first line representing the extending direction of the human body key part according to the position information of the at least two human body contour key points;
and acquiring an included angle formed by the first line and a horizontal line.
6. The live video person tracking method of claim 5, wherein the step of acquiring the included angle formed by the first line and the horizontal line comprises:
acquiring position information of a first end point and position information of a second end point of the first line, and calculating the included angle as:
θ0 = arctan(abs(y2 - y1) / abs(x2 - x1))
wherein θ0 is the included angle formed by the first line and the horizontal line, (x1, y1) is the position information of the first end point, and (x2, y2) is the position information of the second end point.
7. The live video person tracking method of claim 4, wherein, when there are at least three human body contour key points, the step of acquiring the tilt angle information of the human body key part of the target person comprises:
constructing a plurality of branch lines according to the position information of every two adjacent human body contour key points;
and acquiring the included angle between each branch line and the horizontal line, and summing these included angles to obtain the included angle formed by the first line and the horizontal line.
8. A live video person tracking apparatus, the apparatus comprising:
an identification information acquisition module, configured to acquire target person identification information and a first target person identifier of a first video frame, the target person identification information being used for determining the position of a target person in the first video frame;
a next video frame acquisition module, configured to acquire a next video frame adjacent to the first video frame;
a position information acquisition module, configured to acquire position information of human body contour key points of a human body key part of the target person in the first video frame, wherein a human body key part is a part of the human body whose freedom of movement is higher than a preset threshold;
an angle information acquisition module, configured to acquire tilt angle information of the human body key part of the target person according to the position information of the human body contour key points;
and an identification module, configured to: if the tilt angle information of the human body key part of the target person is within a preset activity angle range, determine the display position of the first target person identifier in the next video frame according to the target person identification information of the first video frame, and display the first target person identifier in the next video frame.
9. An electronic device, comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor to perform the live video person tracking method according to any one of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the live video person tracking method according to any one of claims 1 to 7.
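As a back-reference for claim 4, the following is a minimal Python sketch of one possible preset size adjustment rule: stretching an axis-aligned identifier box along the target person's moving direction to obtain the second target person identifier. The (x, y, w, h) box format, the 20% stretch factor, and the centre-preserving growth are illustrative assumptions, not taken from the claims:

    import math

    def stretch_identifier(box, moving_direction_deg, factor=1.2):
        # box is (x, y, w, h) in pixels; moving_direction_deg is the target
        # person's moving direction measured from the horizontal. The box is
        # grown more along the axis the person is moving on, with the centre
        # kept fixed, to obtain the second target person identifier.
        x, y, w, h = box
        horiz = abs(math.cos(math.radians(moving_direction_deg)))
        vert = abs(math.sin(math.radians(moving_direction_deg)))
        new_w = w * (1 + (factor - 1) * horiz)
        new_h = h * (1 + (factor - 1) * vert)
        return (x - (new_w - w) / 2, y - (new_h - h) / 2, new_w, new_h)

For a purely vertical movement (moving_direction_deg = 90), only the height grows: a (100, 100, 80, 200) box becomes, up to floating-point rounding, (100, 80, 80, 240).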
CN202210150699.4A 2022-02-18 Live video character tracking method, device, equipment and storage medium Active CN114466218B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210150699.4A CN114466218B (en) 2022-02-18 Live video character tracking method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210150699.4A CN114466218B (en) 2022-02-18 Live video character tracking method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114466218A true CN114466218A (en) 2022-05-10
CN114466218B CN114466218B (en) 2024-04-23

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103390282A (en) * 2013-07-30 2013-11-13 百度在线网络技术(北京)有限公司 Image tagging method and device
US20170053167A1 (en) * 2015-08-18 2017-02-23 Qualcomm Incorporated Systems and methods for object tracking
CN109671142A (en) * 2018-11-23 2019-04-23 南京图玩智能科技有限公司 A kind of intelligence makeups method and intelligent makeups mirror
CN109740513A (en) * 2018-12-29 2019-05-10 青岛小鸟看看科技有限公司 A kind of analysis of operative action method and apparatus
CN109800685A (en) * 2018-12-29 2019-05-24 上海依图网络科技有限公司 The determination method and device of object in a kind of video
CN110536151A (en) * 2019-09-11 2019-12-03 广州华多网络科技有限公司 The synthetic method and device of virtual present special efficacy, live broadcast system
CN110852254A (en) * 2019-11-08 2020-02-28 杭州网易云音乐科技有限公司 Face key point tracking method, medium, device and computing equipment

Similar Documents

Publication Publication Date Title
CN106792092B (en) Live video stream split-mirror display control method and corresponding device thereof
US10171794B2 (en) Method for selecting cameras and image distribution system capable of appropriately selecting cameras
KR102463304B1 (en) Video processing method and device, electronic device, computer-readable storage medium and computer program
US11558562B2 (en) Apparatus and method for providing 360-degree panoramic background during video call
CN111491174A (en) Virtual gift acquisition and display method, device, equipment and storage medium
US9996220B2 (en) Multi-zone interface switching method and device
US20220191557A1 (en) Method for displaying interaction data and electronic device
CN112312111A (en) Virtual image display method and device, electronic equipment and storage medium
WO2022016915A1 (en) Advertisement information positioning method and corresponding apparatus therefor, advertisement information display method and corresponding apparatus therefor, device, and medium
CN106791915B (en) Method and device for displaying video image
CN114387400A (en) Three-dimensional scene display method, display device, electronic equipment and server
CN114339363B (en) Picture switching processing method and device, computer equipment and storage medium
US20150371449A1 (en) Method for the representation of geographically located virtual environments and mobile device
CN114374853A (en) Content display method and device, computer equipment and storage medium
CN111343409B (en) Method and system for initiating and synchronizing dynamic arrangement of multiple video windows
CN113556481A (en) Video special effect generation method and device, electronic equipment and storage medium
CN114466218B (en) Live video character tracking method, device, equipment and storage medium
CN114466218A (en) Live video character tracking method, device, equipment and storage medium
CN114679591A (en) Video proportion switching method, device and medium for live broadcast room and computer equipment
CN114245158B (en) Live broadcast room head portrait special effect display method and device, equipment, medium and product thereof
CN112672057B (en) Shooting method and device
CN112887793A (en) Video processing method, display device, and storage medium
CN114157875B (en) VR panoramic video preprocessing method, VR panoramic video preprocessing equipment and VR panoramic video storage medium
US20240020910A1 (en) Video playing method and apparatus, electronic device, medium, and program product
WO2019034556A1 (en) Method and device for displaying visual content and for processing an image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant