CN114466218B - Live video character tracking method, device, equipment and storage medium

Info

Publication number: CN114466218B (application CN202210150699.4A)
Authority: CN (China)
Prior art keywords: human body, target person, video frame, target, information
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN114466218A
Inventor: 宫凯程
Current Assignee: Guangzhou Cubesili Information Technology Co Ltd
Original Assignee: Guangzhou Cubesili Information Technology Co Ltd
Application filed by Guangzhou Cubesili Information Technology Co Ltd; priority to CN202210150699.4A

Classifications

    • H04N 21/2187: Selective content distribution; servers for the distribution of content; source of audio or video content; live feed
    • H04N 21/4788: Client devices for the reception of or interaction with content; end-user applications; supplemental services communicating with other users, e.g. chatting
    • H04N 23/695: Cameras or camera modules comprising electronic image sensors; control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application relates to a live video person tracking method, apparatus, device and storage medium, belonging to the technical field of network live broadcasting. According to the application, the movement of a key human body part of the target person is determined from the inclination angle information of that part. When the inclination angle information of the key human body part is within a preset movement angle range, the display position of the first target person identifier in the next video frame is determined directly according to the target person identification information of the first video frame, and the first target person identifier is displayed in the next video frame. Human body detection therefore does not need to run on every video frame of the live video, which improves the efficiency of tracking the target person in the live video.

Description

Live video character tracking method, device, equipment and storage medium
Technical Field
The present application relates to the field of network live broadcasting technologies, and in particular, to a live video person tracking method, apparatus, device, and storage medium.
Background
With the development of internet technology, watching a host's live video in a live room has gradually become a daily entertainment activity. The live platform acts as the medium between host and audience: the host uploads live video data to the platform through a device such as a camera, and the platform then delivers the video to the viewers' clients for playback.
During a live broadcast, the host can add corresponding special effects (such as local deformation) to the person shown in the live picture to improve the audience's viewing experience. In the existing process of adding special effects, however, human body detection must run on every video frame, and the displayed person is then adjusted according to each frame's detection result, so person tracking efficiency is low.
Disclosure of Invention
Based on the above, the application aims to provide a live video person tracking method, apparatus, device and storage medium that can improve live video person tracking efficiency.
According to a first aspect of an embodiment of the present application, there is provided a live video person tracking method, including:
acquiring target person identification information of a first video frame and a first target person identifier, where the target person identification information is used for determining the position of a target person in the first video frame;
acquiring a next video frame immediately adjacent to the first video frame;
acquiring position information of human body contour key points of a key human body part of the target person in the first video frame, where the key human body part is a part of the human body whose freedom of movement is higher than a preset threshold;
acquiring inclination angle information of the key human body part of the target person according to the position information of the human body contour key points of the key human body part; and
if the inclination angle information of the key human body part of the target person is within a preset movement angle range, determining the display position of the first target person identifier in the next video frame according to the target person identification information of the first video frame, and displaying the first target person identifier in the next video frame.
According to a second aspect of an embodiment of the present application, there is provided a live video person tracking apparatus, the apparatus comprising:
an identification information acquisition module, configured to acquire target person identification information of a first video frame and a first target person identifier, where the target person identification information is used for determining the position of a target person in the first video frame;
a next video frame acquisition module, configured to acquire a next video frame immediately adjacent to the first video frame;
a position information acquisition module, configured to acquire position information of human body contour key points of a key human body part of the target person in the first video frame, where the key human body part is a part of the human body whose freedom of movement is higher than a preset threshold;
an angle information acquisition module, configured to acquire inclination angle information of the key human body part of the target person according to the position information of the human body contour key points of the key human body part; and
an identification module, configured to: if the inclination angle information of the key human body part of the target person is within the preset movement angle range, determine the display position of the first target person identifier in the next video frame according to the target person identification information of the first video frame, and display the first target person identifier in the next video frame.
According to a third aspect of the embodiment of the present application, there is provided an electronic device including: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform any one of the live video person tracking methods.
According to a fourth aspect of embodiments of the present application, there is provided a computer readable storage medium having stored thereon a computer program which when executed by a processor implements any one of the live video person tracking methods.
According to the application, the movement of the key human body part of the target person is determined from the inclination angle information of that part. When the inclination angle information is within the preset movement angle range, the display position of the first target person identifier in the next video frame is determined directly according to the target person identification information of the first video frame, and the first target person identifier is displayed at the corresponding position in the next video frame. Human body detection does not need to run on every video frame of the live video, which improves the efficiency of tracking the target person in the live video.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
For a better understanding and implementation, the present application is described in detail below with reference to the drawings.
Drawings
Fig. 1 is a schematic diagram of an application environment of a live video person tracking method according to an embodiment of the present application;
Fig. 2 is a schematic diagram of live video person tracking according to an embodiment of the present application;
Fig. 3 is a flowchart of a live video person tracking method according to an embodiment of the present application;
Fig. 4 is an exemplary diagram of human body contour sampling points provided in an embodiment of the present application;
Fig. 5 is a flowchart of a live video person tracking method according to another embodiment of the present application;
Fig. 6 is a flowchart of a live video person tracking method according to another embodiment of the present application;
Fig. 7 is a schematic diagram of acquiring inclination angle information of a key human body part of a target person according to an embodiment of the present application;
Fig. 8 is a schematic diagram of live video person tracking according to an embodiment of the present application;
Fig. 9 is a schematic diagram of live video person tracking according to an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a live video person tracking apparatus according to an embodiment of the present application;
Fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the following detailed description of the embodiments of the present application will be given with reference to the accompanying drawings.
It should be understood that the described embodiments are merely some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application as detailed in the accompanying claims.
In the description of the present application, it should be understood that the terms "first," "second," "third," and the like are used merely to distinguish between similar objects; they do not describe a particular order or sequence, nor do they indicate or imply relative importance. The specific meaning of these terms in the present application can be understood by those of ordinary skill in the art according to the circumstances. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The word "if" as used herein may be interpreted as "when," "upon," or "in response to determining." Furthermore, in the description of the present application, unless otherwise indicated, "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects are in an "or" relationship.
As will be appreciated by those skilled in the art, the terms "client" and "terminal device" as used herein include both devices that comprise only a wireless signal receiver without transmitting capability and devices whose receiving and transmitting hardware supports two-way communication over a two-way communication link. Such a device may include: a cellular or other communication device, such as a personal computer or tablet, with or without a single-line or multi-line display; a PCS (Personal Communications Service) device that may combine voice, data processing, facsimile and/or data communication capabilities; a PDA (Personal Digital Assistant) that may include a radio frequency receiver, pager, internet/intranet access, web browser, notepad, calendar and/or GPS (Global Positioning System) receiver; and a conventional laptop and/or palmtop computer or other appliance that has and/or includes a radio frequency receiver. As used herein, a "client" or "terminal device" may be portable, transportable, installed in a vehicle (aeronautical, maritime, and/or land-based), or adapted and/or configured to operate locally and/or in a distributed fashion at any location on earth and/or in space. A "client" or "terminal device" may also be a communication terminal, an internet terminal, or a music/video playing terminal, for example a PDA, an MID (Mobile Internet Device) and/or a mobile phone with a music/video playing function, or a device such as a smart TV or a set-top box.
The hardware referred to in this application as "server," "client," "service node," and the like is essentially computer equipment in the sense of a personal computer: a hardware device with the components required by the von Neumann architecture, such as a central processing unit (including an arithmetic unit and a controller), memory, input devices and output devices. A computer program is stored in the memory; the central processing unit loads the program from external storage and runs it, executes the instructions in the program, and interacts with the input and output devices to complete a specific function.
It should be noted that the concept referred to in this application as a "server" applies equally to server clusters. According to network deployment principles understood by those skilled in the art, the servers should be logically partitioned: physically separate from each other yet callable through interfaces, or integrated into one physical computer or a group of computers. Those skilled in the art will appreciate this variation, which should not be construed as limiting how the network deployment of the present application is implemented.
Referring to fig. 1, fig. 1 is a schematic application scenario diagram of a live video person tracking method according to an embodiment of the present application. The application scenario includes an anchor side 20, a server side 10 and a viewer side 30; the anchor side 20 interacts with the viewer side 30 through the server side 10.
The anchor side 20 is the side that transmits the webcast video, and is generally the terminal used by the anchor in a webcast.
The viewer side 30 is the side that receives and views the webcast video, and is typically the terminal used by a viewer watching the video in a webcast.
The hardware behind the anchor side 20 and the viewer side 30 is essentially computer equipment; specifically, as shown in fig. 1, it may be a smartphone, a smart interactive tablet, a personal computer, and the like. Both the anchor side 20 and the viewer side 30 can access the internet through well-known network access methods and establish a data communication link with the server side 10.
The server side 10 acts as a service server and may be responsible for further connecting related audio data servers, video streaming servers and other servers providing related support, forming a logically related service cluster that provides services to related terminal devices, such as the anchor side 20 and the viewer side 30 shown in fig. 1.
In a live room, interaction between the anchor and the audience can take place through well-known online interaction modes such as voice, video and text. Typically the anchor performs for the audience in the form of an audio/video stream, and economic transactions may occur during the interaction. Of course, the live video person tracking method of the embodiments of the present application can also be generalized to other related scenarios, for example short videos and any other scenario where a target object needs to be tracked in a video.
Specifically, the process of a viewer watching a live broadcast is as follows: the viewer clicks a live application (for example YY) installed on the viewer terminal 30 and selects any live room to enter, which triggers the viewer terminal 30 to load the live room interface. The interface contains several interaction components; by loading these components the viewer can watch the live broadcast in the room and take part in various online interactions.
During a live broadcast, the anchor can add corresponding special effects (local deformation such as slimming) to the person shown in the live picture to improve the audience's viewing experience. To add a special effect to a target person on the live room interface, however, the position of the target person must be detected and tracked in real time, because the target person may move at any moment. When tracking the target person, a person detection frame 401 as shown in fig. 2 can be used to frame the target person in the live picture, identifying the target person's position in the live room.
When the target person moves, the person's position needs to be re-identified and the person re-framed by the person detection frame 401 in order to keep tracking. In the prior art, tracking a moving target person in a video requires human body detection on every video frame, after which the displayed target person is identified according to each frame's detection result, so tracking efficiency is low.
Accordingly, referring to fig. 3, an embodiment of the present application provides a live video person tracking method, including the following steps:
S101: acquiring target person identification information of a first video frame and a first target person identifier; the target person identification information is used for determining the position of a target person in the first video frame;
In general, after the anchor side starts broadcasting, live video is captured with the anchor terminal's built-in camera or an external camera that has established a data connection with the anchor terminal; the live video comprises multiple frames of live pictures.
The first video frame may be a frame of the live video that contains the target person.
In one embodiment, the first video frame may be the live picture at the current point in time. In another embodiment, the first video frame may be the live picture in which the target person first appears in the live video: specifically, a human body detection technique can be applied to every frame of the live video, the live picture in which the target person first appears is determined from the time order of the frames in which the target person is detected, and that picture is taken as the first video frame.
The target person identification information is used for determining the position of the target person in the first video frame; when the first video frame is a live frame of a live video, the target person may be a host or guest of the live room.
Specifically, the existing human body detection technology can be adopted to detect the human body of the target person in the first video frame, so as to obtain the target person identification information of the first video frame.
The target person identification information may be area information of the area where the target person is located in the first video frame. For example, when an existing human bounding box detection algorithm is used to detect the target person in the first video frame, the target person identification information may include the size, position and other information of the generated human bounding box.
The first target person identifier may be a graphic, icon or the like preset by the user and used for determining the location of the target person in the live room.
Alternatively, in one embodiment, the first target person identifier is a graphic, icon or similar identifier that can cover the area where the target person in the live room is located, and it can be generated according to the target person identification information of the first video frame. For example, as shown in fig. 2, in an embodiment of the present application the first target person identifier may be a minimum enclosing rectangular box that frames all parts of the target person, including the torso and head.
In one embodiment, when the first video frame is a live picture containing the target person, the corresponding target person identifier may also be displayed in the first video frame according to the target person identification information of the first video frame; the generated identifier, such as a graphic or an icon, may cover the area where the target person in the live room is located. For example, the target person identifier may be a minimum enclosing rectangular box that frames all parts of the target person, including the torso and head.
The size of the target person identifier and the position at which it is displayed in the first video frame may be set according to the target person identification information of the first video frame and the user's requirements.
The corresponding target person identifier is displayed on each video frame of the live video that contains the target person, and the identifier is used to dynamically track the target person in the live video.
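For illustration only, the following minimal Python sketch shows one possible shape of the target person identification information and the identifier derived from it; the class and field names are assumptions of the example, not definitions from the application:

```python
from dataclasses import dataclass

@dataclass
class PersonIdentificationInfo:
    """Area information of the target person, e.g. from a bounding-box detector."""
    x: float       # top-left x of the human bounding box, in pixels
    y: float       # top-left y of the human bounding box, in pixels
    width: float   # box width, in pixels
    height: float  # box height, in pixels

@dataclass
class PersonIdentifier:
    """Graphic drawn over the target person, e.g. a minimum enclosing rectangle."""
    box: PersonIdentificationInfo

def identifier_from_detection(info: PersonIdentificationInfo) -> PersonIdentifier:
    # In this sketch the identifier simply covers the detected region;
    # its size and display position could also follow user settings.
    return PersonIdentifier(box=info)
```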
S102: acquiring a next video frame adjacent to the first video frame;
S103: acquiring position information of human body contour key points of human body key parts of a target person in the first video frame;
Existing human body detection techniques are prone to detection errors at frequently moving parts such as the wrists and arms; for example, detection is often inaccurate when an arm extends horizontally, so the generated human detection frame cuts off the wrist or arm, which affects the accuracy of target person tracking.
Therefore, to solve this problem, the embodiments of the present application additionally acquire the position information of the key human body parts of the target person and determine, from the position information of the human body contour key points of those parts, how the target person is displayed in the next video frame, thereby improving the accuracy of target person tracking.
The human body contour key points are human body contour sampling points at key human body parts, where the human body contour sampling points can be obtained by manual annotation according to the human body parts contained in the live picture.
In a preferred embodiment, the body contour sampling points may be 64 sampling points around the body, as shown in fig. 4, involving various parts of the body.
The key human body parts are the parts of the human body whose freedom of movement is higher than a preset threshold, and they can be determined according to the daily activity of the human body.
The more often a part moves, the higher its corresponding freedom of movement; for example, the key human body parts can be frequently moving parts such as the wrists and arms.
Specifically, the key human body part may be the wrist or the forearm. As shown in fig. 4, when the key human body part is the forearm, the contour key points may include the contour sampling points [3, 4, 5, 6, 8, 9, 10, 11] of the left forearm and the contour sampling points [47, 48, 49, 50, 52, 53, 54, 55] of the right forearm.
In one embodiment, the location information of the human body contour key points may be obtained by identifying human body contour key points of the human body key parts of the target person in the first video frame using an existing contour point detection algorithm.
Preferably, in the embodiment of the present application, the area where the target person is located is determined from the target person identification information of the first video frame; a human contour point detection model is then run on that area to obtain the position information of all human body contour sampling points, from which the position information of the human body contour key points is further obtained. Determining the region where the target person is located before acquiring the human body contour sampling points and key points effectively improves the positioning accuracy of the contour key points and thus the tracking accuracy of the target person.
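A sketch of this region-first approach, reusing PersonIdentificationInfo from the earlier sketch; contour_model stands for any callable returning the contour sampling points of a crop and is a placeholder, not a specific library:

```python
def detect_contour_keypoints(frame, info, contour_model, key_indices):
    """Run the contour-point model only on the target person's region.

    frame is an HxWx3 image array; info is the target person identification
    information; key_indices selects the key part's contour points (e.g.
    [3, 4, 5, 6, 8, 9, 10, 11] for the left forearm).
    """
    x, y = int(info.x), int(info.y)
    w, h = int(info.width), int(info.height)
    crop = frame[y:y + h, x:x + w]
    points = contour_model(crop)  # e.g. 64 (x, y) points in crop coordinates
    # Map back to full-frame coordinates and keep only the key part's points.
    return {i: (points[i][0] + x, points[i][1] + y) for i in key_indices}
```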
S104: acquiring inclination angle information of the human body key part of the target person according to the position information of the human body contour key point of the human body key part;
The inclination angle of a key human body part may be the included angle between a line segment constructed from the human body contour key points and the horizontal line, where there are at least two human body contour key points and the constructed line segment can be used to determine the movement direction of the key human body part.
Acquiring the inclination angle information of the key human body part of the target person involves constructing, from the position information of the human body contour key points, a line that represents the movement direction of the key human body part, and then obtaining the included angle between that line and the horizontal line based on trigonometry.
Specifically, the step of acquiring inclination angle information of a key part of a human body of a target person includes:
constructing a first line representing the extension direction of the key human body part according to the position information of the at least two human body contour key points;
and acquiring the included angle between the first line and the horizontal line.
In one embodiment, the first line representing the extension direction of the key human body part can be obtained by directly connecting the human body contour key points.
If the first line is obtained by directly connecting the human body contour key points, its first and second endpoints are the positions of two human body contour key points. Specifically, the step of obtaining the included angle between the first line and the horizontal line comprises:
acquiring the position information of the first endpoint and of the second endpoint of the first line, and computing
θ0 = arctan(abs(y2 - y1) / abs(x2 - x1))
where θ0 is the included angle between the first line and the horizontal line, (x1, y1) is the position of the first endpoint, and (x2, y2) is the position of the second endpoint.
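A minimal Python sketch of this computation, assuming keypoints are given as (x, y) pixel coordinates; the function name and the 90-degree convention for a vertical segment are assumptions of the example:

```python
import math

def tilt_angle_deg(p1, p2):
    """Included angle between segment p1->p2 and the horizontal, in degrees.

    Mirrors theta0 = arctan(abs(y2 - y1) / abs(x2 - x1)).
    """
    (x1, y1), (x2, y2) = p1, p2
    dx = abs(x2 - x1)
    if dx == 0:
        return 90.0  # vertical segment: treat as 90 degrees to the horizontal
    return math.degrees(math.atan(abs(y2 - y1) / dx))
```

For example, tilt_angle_deg((0, 0), (10, 10)) returns 45.0.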
In one embodiment, when the human body contour key points are several human body contour sampling points on the key part, several lines can be constructed from the key points; the included angle between each line and the horizontal line is calculated separately, and the included angle between the first line and the horizontal line is obtained by summing or averaging those angles.
Specifically, in the embodiment of the present application, when there are at least three human body contour key points, the step of obtaining the inclination angle information of the key human body part of the target person comprises:
constructing several sub-lines according to the position information of every two adjacent human body contour key points;
and obtaining the included angle between each sub-line and the horizontal line, then summing them to obtain the included angle between the first line and the horizontal line.
Each sub-line may be the line between two adjacent human body contour key points; the included angle between each sub-line and the horizontal line can be obtained from the formula above and is not repeated here.
Since the contour key points here are contour sampling points of the key human body parts, they generally include sampling points on both sides of the part (e.g. the upper contour sampling points [3, 4, 5, 6] and lower contour sampling points [8, 9, 10, 11] of the left arm in fig. 4). The sampling points on the two sides may deviate somewhat with the run of the muscle: if the first line is constructed simply from the sampling points on one side, the judgment of the extension direction of the key human body part may be affected; if lines are constructed from the sampling points on both sides and each line's angle with the horizontal is calculated, the amount of computation is larger.
Thus, in a preferred embodiment, there are at least four human body contour key points arranged in opposite pairs; the first endpoint of the first line is the midpoint of one pair of oppositely arranged human body contour key points, and the second endpoint is the midpoint of another pair.
The step of constructing the first line representing the extension direction of the key human body part then specifically comprises:
acquiring the position information of the midpoints of the oppositely arranged human body contour key points, and constructing the first line representing the extension direction of the key human body part from the position information of those midpoints.
Because the first line is obtained by connecting the midpoints of the contour key points arranged opposite each other on the two sides of the key part, misjudging the extension direction of the part due to inconsistent runs of the contour sampling points on the two sides is avoided, which effectively improves the tracking accuracy of the live video person.
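As a sketch, the midpoint-based construction can be expressed as follows; the pairing convention, i.e. that upper_pts[i] is opposite lower_pts[i], is an assumption of the example:

```python
def midpoint(a, b):
    """Midpoint of two oppositely arranged contour keypoints (x, y)."""
    return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)

def first_line_endpoints(upper_pts, lower_pts):
    """Endpoints of the first line for a key human body part.

    upper_pts and lower_pts are the contour keypoints on the two sides of
    the part, ordered so that upper_pts[i] is opposite lower_pts[i]; the
    outermost midpoints serve as the line's endpoints.
    """
    mids = [midpoint(u, l) for u, l in zip(upper_pts, lower_pts)]
    return mids[0], mids[-1]
```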
S105: if the inclination angle information of the key human body part of the target person is within the preset movement angle range, determining the display position of the first target person identifier in the next video frame according to the target person identification information of the first video frame, and displaying the first target person identifier in the next video frame.
The movement angle range can be set according to users' actual requirements; for example, it may be set to angles smaller than a preset threshold, such as 60°.
When the included angle between the first line and the horizontal line is smaller than or equal to the preset angle threshold, the inclination angle information of the key human body part of the target person is determined to be within the preset movement angle range; the display position of the first target person identifier in the next video frame is then determined, and the identifier displayed, according to the target person identification information of the first video frame. If the included angle between the first line and the horizontal line is larger than the preset angle threshold, the movement of the target person is determined to exceed the preset movement angle range.
Determining the display position of the first target person identifier in the next video frame from the target person identification information of the first video frame may mean determining, from that information, the area the identifier should enclose in the next video frame. For example, the area to be enclosed by the human detection frame can be determined from the human body detection result of the first video frame, and a human detection frame that frames all parts of the target person (including the torso and head) is displayed in the next video frame.
In the embodiments of the present application, the key human body part is a part of the human body with high freedom of movement; the movement of the target person is judged from the movement of that part, which in turn decides whether the display position of the first target person identifier in the next video frame can be determined from the target person identification information of the first video frame. Human body detection on the next video frame is then unnecessary, reducing the frequency of human contour detection.
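The frame-to-frame decision can be sketched as below; the constant and function names are illustrative, and the fallback corresponds to steps S201 and S301 described later:

```python
ACTIVITY_ANGLE_MAX_DEG = 60.0  # example threshold; set per actual requirements

def place_identifier_in_next_frame(prev_identifier, tilt_deg):
    """Reuse the first frame's identification result when the key part's
    inclination angle is within the preset movement angle range; otherwise
    signal that the identifier must be adjusted."""
    if tilt_deg <= ACTIVITY_ANGLE_MAX_DEG:
        # Within range: display the first identifier at the position implied
        # by the first frame's identification info; no detection on this frame.
        return prev_identifier
    return None  # caller falls back to re-detection (S201) or stretching (S301)
```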
In an alternative embodiment, the execution subject of the live video person tracking method may be the anchor side; in another alternative embodiment it may be the server side.
According to the application, the movement of the key human body part of the target person is determined from the part's inclination angle information. When that information is within the preset movement angle range, the first target person identifier is displayed directly at the corresponding position in the next video frame according to the target person identification information of the first video frame, without human body detection on every video frame of the live video, which improves the efficiency of tracking the target person in the live video.
For example, when the key part is the left arm, the extension angle of the left arm is determined from the position information of the contour key points on the left arm, and whether the left arm extends beyond the original coverage of the first target person identifier is determined from that angle. If the left arm remains within the identifier's coverage, the display position of the first target person identifier in the next video frame is determined directly from the target person identification information of the first video frame, and the identifier is displayed at the corresponding position in the next video frame.
When the inclination angle information of the key human body part of the target person exceeds the preset movement angle range, the target person identifier in the next video frame needs to be adjusted so that the displayed identifier can cover the target person's position after the movement.
The live video person tracking method can be integrated, together with a human body segmentation function, into a human special-effects product (such as an electronic device provided with a live client, a video playing client, or a video image processing client, e.g. for slimming or leg-lengthening effects), or integrated, together with a motion recognition function, into an application for motion recognition (for example, a live platform can capture the anchor's motion and use it to add motion special effects), so as to improve the fun of watching live broadcasts and the user experience.
In an alternative embodiment, as shown in fig. 5, the live video person tracking method further comprises the steps of:
S201: if the inclination angle information of the key human body part of the target person exceeds the preset movement angle range, acquiring the position information of the target person's position in the next video frame and a second target person identifier;
The position information of the target person in the next video frame can be obtained using existing human body detection techniques.
The second target person identifier may be a graphic, icon or the like preset by the user to determine the location of the target person in the live room.
It should be noted that, in the embodiments of the present application, the second target person identifier is different from the first target person identifier.
The second target person identifier may be a different graphic or icon than the first target person identifier, or it may be the same graphic or icon but of a different size.
S202: displaying the second target person identifier at the target person's position in the next video frame according to the position information of that position.
In this embodiment, the position of the target person in the next video frame is acquired with a human body detection technique, and the preset second target person identifier is displayed at the corresponding position of the next video frame, thereby tracking the target person in the live video.
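A sketch of this fallback path, reusing the classes from the first sketch; detector stands for any human body detection routine returning a bounding box and is a placeholder, not a specific library call:

```python
def redetect_and_identify(next_frame, detector):
    """S201/S202 fallback: detect the target person in the next frame and
    place the preset second target person identifier at that position."""
    x, y, w, h = detector(next_frame)  # position info of the target person
    return PersonIdentifier(box=PersonIdentificationInfo(x, y, w, h))
```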
In existing human body detection techniques, detection anomalies easily occur at key parts such as the wrists and arms. If the target person in the live video is framed by a human detection frame directly according to the detection result, key parts such as the wrists and arms may be cut off, affecting subsequent processing of the live video.
Thus, in response to the above-described problems, as shown in fig. 6, in a preferred embodiment, the live video person tracking method further comprises the steps of:
S301: if the inclination angle information of the key human body part of the target person exceeds the preset movement angle range, acquiring a second target person identifier;
S302: determining the display position of the second target person identifier in the next video frame according to the target person identification information of the first video frame, and displaying the second target person identifier in the next video frame.
In this embodiment, the display position of the second target person identifier is determined from the target person identification information of the first video frame. When the first target person identifier is displayed in the first video frame and its display position is likewise determined from that information, the display position of the second target person identifier may be the same as that of the first.
When the inclination angle information of the key human body part of the target person exceeds the preset movement angle range, the display position of the second target person identifier in the next video frame is determined from the target person identification information of the first video frame, and a different, second target person identifier is displayed at the corresponding position in the next video frame.
The second target person identifier may cover the area of the target person within the live room; for example, it may be a minimum enclosing rectangular box that frames all parts of the target person, including the torso and head. When the target person moves substantially, a larger or smaller second target person identifier can be obtained to track the target person in the live video.
Optionally, the step of obtaining the second target person identifier comprises:
determining the movement direction information of the target person according to the inclination angle information of the key human body part of the target person;
and stretching the first target person identifier along the target person's movement direction, according to a preset resizing rule and the movement direction information, to obtain the second target person identifier.
In this embodiment, the second target person identifier may be the same graphic or icon as the first target person identifier, but of a different size.
Further, the second and first target person identifiers are human detection frames: minimum enclosing rectangular boxes that frame all parts of the target person (including the torso and head).
The preset resizing rule may be used to determine the stretching dimension of the first target person identifier; for example, when the first target person identifier is a human detection frame, the rule may determine the flare length of the frame.
For example, when the key human body part is an arm, the flare length can be set to a target multiple of the human detection frame's width; the target multiple can be set according to the application's actual requirements, for example 0.1.
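A sketch of this resizing rule, using the 0.1 multiple from the example above; the (x, y, w, h) tuple layout and the direction strings are assumptions of the example:

```python
FLARE_FACTOR = 0.1  # target multiple of the detection frame's width

def stretch_detection_frame(box, direction):
    """Flare the human detection frame along the arm's moving direction.

    box is (x, y, w, h) with (x, y) the top-left corner; direction is
    "left" or "right".
    """
    x, y, w, h = box
    flare = FLARE_FACTOR * w
    if direction == "left":
        return (x - flare, y, w + flare, h)
    if direction == "right":
        return (x, y, w + flare, h)
    return box

def stretch_both_sides(box):
    """Both arms extended: flare the frame to both sides, as in fig. 9."""
    x, y, w, h = box
    flare = FLARE_FACTOR * w
    return (x - flare, y, w + 2 * flare, h)
```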
When the inclination angle information of the key human body part of the target person exceeds the preset movement angle range, the target person's movement direction information is determined from the specific key part whose angle exceeds the range; for example, when the inclination angle information of the left arm exceeds the preset movement angle range, it is determined that the target person is stretching the left arm horizontally.
It will be appreciated that when the target person performs an action such as stretching the left arm horizontally, the original first target person identifier cannot cover the horizontally stretched arm; the identifier therefore needs to be stretched in the arm's movement direction so as to cover it.
The live video person tracking method of the embodiments of the present application is described below in detail, taking the arm as the key human body part:
As shown in fig. 7, the contour key points of the left arm comprise the upper contour sampling points [3, 4, 5, 6] and the lower contour sampling points [8, 9, 10, 11].
The midpoints of every two oppositely arranged human body contour key points are obtained, and lines representing the extension direction of the left arm are constructed from these midpoints: for example, sub-line L0 is constructed from the midpoint of [3, 11] and the midpoint of [4, 10]; similarly, sub-line L1 is constructed from the midpoint of [4, 10] and the midpoint of [5, 9], and sub-line L2 from the midpoint of [5, 9] and the midpoint of [6, 8].
Based on trigonometry, the included angles between each of the sub-lines L0-L2 and the horizontal line are calculated.
Taking sub-line L0 as an example, its two endpoints can be calculated from two pairs of oppositely arranged human body contour key points. Suppose the endpoints of L0 are (x1, y1) and (x2, y2), where (x1, y1) is calculated from the position information of contour key points [3, 11] and (x2, y2) from that of contour key points [4, 10].
The included angle between sub-line L0 and the horizontal line is calculated as:
θ0 = arctan(abs(y2 - y1) / abs(x2 - x1))
where θ0 is the included angle between sub-line L0 and the horizontal line, (x1, y1) is the position of L0's first endpoint, and (x2, y2) is the position of its second endpoint.
In the same way, the included angle θ1 between sub-line L1 and the horizontal line and the included angle θ2 between sub-line L2 and the horizontal line can be obtained, and the included angle between the first line and the horizontal line is calculated as:
θ = θ0 + θ1 + θ2
where θ is the included angle between the first line and the horizontal line.
In the embodiment of the present application, suppose the movement angle range is "smaller than 60°". When θ ≤ 60°, it is determined that the target person's arm is not extending horizontally, and the human detection frame is displayed in the next video frame directly according to the target person identification information of the first video frame; when θ > 60°, it is determined that the target person's left arm is stretched horizontally, and the human detection frame needs to be stretched so as to cover the horizontally stretched left arm.
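Putting the pieces together for the left arm of fig. 7, a sketch reusing midpoint and tilt_angle_deg from the earlier sketches; the keypoints mapping from contour-point index to (x, y) position is an assumed input format:

```python
LEFT_ARM_PAIRS = [(3, 11), (4, 10), (5, 9), (6, 8)]  # opposite contour points

def arm_extended(keypoints, pairs, threshold_deg=60.0):
    """True if the arm is judged horizontally stretched.

    Sub-lines join adjacent midpoints of the opposite pairs, and theta sums
    the sub-lines' included angles with the horizontal; per the description,
    theta > 60 degrees is treated as a horizontal stretch.
    """
    mids = [midpoint(keypoints[u], keypoints[l]) for u, l in pairs]
    theta = sum(tilt_angle_deg(mids[i], mids[i + 1])
                for i in range(len(mids) - 1))
    return theta > threshold_deg
```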
Based on the same principle, the human body contour key points [55, 54, 53, 52] on the upper side of the right arm and [47, 48, 49, 50] on the lower side of the right arm in fig. 7 can be obtained; the midpoints of every two oppositely arranged key points are computed, sub-lines R0, R1 and R2 representing the extension direction of the right arm are constructed from those midpoints, and the included angles of R0, R1 and R2 with the horizontal line are obtained, yielding the included angle between the first line representing the right arm's extension direction and the horizontal line. Comparing this angle with the movement angle range determines whether the right arm of the target person is extending horizontally.
The included angles of sub-lines R0, R1 and R2 with the horizontal line, and the included angle between the right arm's first line and the horizontal line, are calculated in the same way as for the left arm above and are not repeated here.
The way the human detection frame is displayed in the video frame is then adjusted according to the horizontal extension of the target person's left and right arms, as sketched below.
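The display-mode adjustment for both arms can be sketched as follows, reusing the helpers above; the right-arm pairing is an assumption consistent with the left arm's convention, and the frame numbers refer to figs. 8 and 9:

```python
RIGHT_ARM_PAIRS = [(55, 47), (54, 48), (53, 49), (52, 50)]

def update_detection_frame(box, keypoints):
    """Choose how to display the human detection frame in the next frame."""
    left = arm_extended(keypoints, LEFT_ARM_PAIRS)
    right = arm_extended(keypoints, RIGHT_ARM_PAIRS)
    if left and right:
        return stretch_both_sides(box)               # frame 403 in fig. 9
    if left:
        return stretch_detection_frame(box, "left")  # frame 402 in fig. 8
    if right:
        return stretch_detection_frame(box, "right")
    return box  # within the movement angle range: reuse the previous box
```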
Specifically, fig. 8 is a schematic diagram of live video person tracking in an embodiment of the present application, where diagram (a) is the display picture of a first video frame containing a human detection frame 401, and diagram (b) is the display picture of the next video frame immediately adjacent to the first video frame.
In one embodiment, if the above steps determine that the target person's left arm is extending horizontally, a human detection frame 402 is displayed in the next video frame.
The human detection frame 402 is obtained by flaring the human detection frame 401 to the left so that it covers the target person's left arm after the horizontal stretch.
Fig. 9 is a schematic diagram of live video person tracking according to another embodiment of the present application, where diagram (c) is the display picture of a first video frame containing a human detection frame 401, and diagram (d) is the display picture of the next video frame immediately adjacent to the first video frame.
In one embodiment, if the above steps determine that both the left and right arms of the target person are extending horizontally, a human detection frame 403 is displayed in the next video frame.
The human detection frame 403 is obtained by flaring the human detection frame 401 to both the left and the right so that it covers the target person's arms after the horizontal stretch.
It should be noted that the above embodiments are merely exemplary descriptions and should not limit the functions and scope of the present disclosure.
A person skilled in the art can, combining the content of this application, detect the movement of other key human body parts (such as the limbs) and adjust how the target person identifier of the next video frame is displayed, so as to track the live video target.
In the embodiments of the present application, whether the size of the target person identifier needs adjusting is determined from the stretching of key parts such as the arms. When the target person moves substantially, the identifier's size is adjusted according to the person's movement between the preceding and following video frames, so the target person in the live video is tracked in time and the accuracy of human contour detection improves. When the target person does not move substantially, the target person is identified in the next video frame directly according to the previous frame's target person identification information, without human body detection on every video frame, which improves the tracking efficiency for the live video target person.
As shown in fig. 10, the embodiment of the present application further provides a live video person tracking apparatus, which may be implemented as all or a part of a computer device through software, hardware, or a combination of both. The device comprises:
an identification information acquisition module 501, configured to acquire target person identification information of a first video frame and a first target person identifier, where the target person identification information is used for determining the position of a target person in the first video frame;
a next video frame acquisition module 502, configured to acquire a next video frame immediately adjacent to the first video frame;
a position information acquisition module 503, configured to acquire position information of human body contour key points of a key human body part of the target person in the first video frame, where the key human body part is a part of the human body whose freedom of movement is higher than a preset threshold;
an angle information acquisition module 504, configured to acquire inclination angle information of the key human body part of the target person according to the position information of the human body contour key points of the key human body part; and
an identification module 505, configured to: if the inclination angle information of the key human body part of the target person is within the preset movement angle range, determine the display position of the first target person identifier in the next video frame according to the target person identification information of the first video frame, and display the first target person identifier in the next video frame.
It should be noted that, when the live video person tracking apparatus provided in the above embodiment performs the live video person tracking method, the division into the above functional modules is only an example; in practical applications, these functions may be assigned to different functional modules as needed, i.e., the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the live video person tracking apparatus and the live video person tracking method provided by the above embodiments belong to the same concept; the detailed implementation is embodied in the method embodiments and is not repeated here.
The embodiment provides an electronic device which can be used for executing all or part of steps of a live video person tracking method. For details not disclosed in this embodiment, please refer to the method embodiment of the present application.
Referring to fig. 11, fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the application. The electronic device 600 may be, but is not limited to being, a combination of one or more of a variety of servers, personal computers, notebook computers, smart phones, tablet computers, etc.
In a preferred embodiment of the present application, the electronic device 600 includes a memory 601, at least one processor 602, at least one communication bus 603, and a transceiver 604.
It should be understood by those skilled in the art that the structure of the electronic device shown in fig. 11 does not limit the embodiments of the present application; the electronic device 600 may include more or fewer hardware or software components than shown, or a different arrangement of components, and either a bus-type or a star-type configuration may be adopted.
In some embodiments, the electronic device 600 is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit, a programmable gate array, a digital processor, an embedded device, and the like. The electronic device 600 may also include a client device, including but not limited to any electronic product that can interact with a client by way of a keyboard, a mouse, a remote control, a touch pad, or a voice-control device, such as a personal computer, a tablet computer, a smart phone, or a digital camera.
It should be noted that the electronic device 600 is only an example, and other existing or future electronic products that are adaptable to the present application are also included within the scope of protection of the present application and are incorporated herein by reference.
In some embodiments, the memory 601 stores a computer program that, when executed by the at least one processor 602, performs all or part of the steps of the live video person tracking method as in the first embodiment. The memory 601 includes Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-Time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disk memory, magnetic disk memory, tape memory, or any other medium capable of carrying or storing data.
In some embodiments, the at least one processor 602 is the control unit (Control Unit) of the electronic device 600; it connects the various components of the entire electronic device 600 using various interfaces and lines, and performs the various functions of the electronic device 600 and processes data by running or executing the programs or modules stored in the memory 601 and invoking the data stored in the memory 601. For example, when executing the computer program stored in the memory, the at least one processor 602 implements all or part of the steps of the live video person tracking method described in the embodiments of the present application, or implements all or part of the functions of the live video person tracking apparatus. The at least one processor 602 may be composed of integrated circuits, for example a single packaged integrated circuit, or multiple integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPU), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and the like.
In some embodiments, the at least one communication bus 603 is arranged to enable connection and communication between the memory 601, the at least one processor 602, and the like.
The electronic device 600 may further include various sensors, Bluetooth modules, Wi-Fi modules, and the like, which are not described herein again.
This embodiment provides a computer-readable storage medium having a computer program stored thereon; the program is adapted to be loaded by a processor to execute the live video person tracking method according to the first embodiment of the present application. For the specific execution process, refer to the specific description of the first embodiment, which is not repeated here.
For the device embodiments, reference is made to the description of the method embodiments for the relevant points, since they essentially correspond to the method embodiments. The above-described apparatus embodiments are merely illustrative, wherein the components illustrated as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purposes of the present application. Those of ordinary skill in the art will understand and implement the present application without undue burden.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (10)

1. A live video person tracking method, the method comprising:
acquiring target person identification information of a first video frame and a first target person identifier; the target person identification information is used for determining the position of a target person in the first video frame;
acquiring a next video frame adjacent to the first video frame;
acquiring position information of human body contour key points of a human body key part of the target person in the first video frame; wherein the human body key part is a part of the human body with an activity degree of freedom higher than a preset threshold value; the number of the human body contour key points is at least two;
acquiring inclination angle information of the human body key part of the target person according to the position information of the human body contour key points of the human body key part;
and if the inclination angle information of the human body key part of the target person is within the preset activity angle range, determining the display position of the first target person identifier in the next video frame according to the target person identification information of the first video frame, and displaying the first target person identifier in the next video frame.
2. The live video person tracking method of claim 1, further comprising the steps of:
if the inclination angle information of the human body key part of the target person exceeds the preset activity angle range, acquiring position information of the position of the target person in the next video frame and a second target person identifier;
and displaying the second target person identifier at the position of the target person in the next video frame according to the position information of the position of the target person in the next video frame.
3. The live video person tracking method of claim 1, further comprising the steps of:
if the inclination angle information of the human body key part of the target person exceeds the preset activity angle range, acquiring a second target person identifier; wherein the second target person identifier is different from the first target person identifier;
and determining the display position of the second target person identifier in the next video frame according to the target person identification information of the first video frame, and displaying the second target person identifier in the next video frame.
4. The live video person tracking method according to any one of claims 2 to 3, wherein the step of acquiring the second target person identifier comprises:
determining moving direction information of the target person according to the inclination angle information of the human body key part of the target person;
and stretching the first target person identifier along the moving direction of the target person according to a preset size adjustment rule and the moving direction information of the target person, so as to obtain the second target person identifier.
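For illustration only, a minimal sketch of one possible size-adjustment rule follows; the (x, y, w, h) box format, the direction encoding, and the stretch factor are all assumptions, not taken from the claims:

def stretch_marker(box, direction, factor=1.25):
    """Stretch an (x, y, w, h) identifier box along the movement direction.

    direction is a toy stand-in ('left', 'right', 'up', 'down') for the
    moving-direction information derived from the inclination angle.
    """
    x, y, w, h = box
    if direction == "right":
        return (x, y, int(w * factor), h)
    if direction == "left":
        grow = int(w * (factor - 1))
        return (x - grow, y, w + grow, h)
    if direction == "down":
        return (x, y, w, int(h * factor))
    grow = int(h * (factor - 1))  # remaining case: 'up'
    return (x, y - grow, w, h + grow)

print(stretch_marker((100, 100, 80, 200), "left"))  # (80, 100, 100, 200)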
5. The live video person tracking method of claim 1, wherein the number of the human body contour key points is at least two, and the step of acquiring the inclination angle information of the human body key part of the target person comprises:
constructing a first line for representing the extending direction of the human body key part according to the position information of the at least two human body contour key points;
and acquiring an included angle formed by the first line and a horizontal line.
6. The live video person tracking method of claim 5, wherein the step of acquiring the included angle formed by the first line and the horizontal line comprises:
acquiring position information of a first endpoint and position information of a second endpoint of the first line;
θ0 = arctan(abs(y2 - y1) / abs(x2 - x1))
wherein θ0 is the included angle formed between the first line and the horizontal line, (x1, y1) is the position information of the first endpoint, and (x2, y2) is the position information of the second endpoint.
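As a quick numeric check of this formula (the endpoint coordinates are invented for illustration): with (x1, y1) = (0, 0) and (x2, y2) = (40, 30), θ0 = arctan(30/40) ≈ 36.9°. In Python:

import math

x1, y1, x2, y2 = 0, 0, 40, 30
theta0 = math.degrees(math.atan(abs(y2 - y1) / abs(x2 - x1)))
print(round(theta0, 1))  # 36.9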
7. The live video person tracking method according to claim 4, wherein, when the human body contour key points are at least three human body contour sampling points on the key part, the step of acquiring the inclination angle information of the human body key part of the target person comprises:
constructing a plurality of sub-lines according to the position information of every two adjacent human body contour key points;
and acquiring and summing the included angle between each sub-line and the horizontal line, so as to obtain the included angle formed by the first line and the horizontal line.
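Illustratively, this summation over sub-lines might be computed as follows; the function name and the sampling-point values are invented, and at least three sampling points are assumed:

import math

def summed_incline(points):
    """Sum the horizontal inclinations of the sub-lines joining every two
    adjacent human body contour sampling points."""
    total = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        total += math.degrees(math.atan2(abs(y2 - y1), abs(x2 - x1)))
    return total

# Three sampling points along an arm contour (hypothetical values):
print(round(summed_incline([(0, 0), (10, 5), (18, 14)]), 1))  # about 74.9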
8. A live video person tracking apparatus, the apparatus comprising:
the identification information acquisition module is used for acquiring target person identification information of a first video frame and a first target person identifier; the target person identification information is used for determining the position of a target person in the first video frame;
a next video frame acquisition module, configured to acquire a next video frame immediately adjacent to the first video frame;
the position information acquisition module is used for acquiring position information of human body contour key points of a human body key part of the target person in the first video frame; wherein the human body key part is a part of the human body with an activity degree of freedom higher than a preset threshold value; the number of the human body contour key points is at least two;
the angle information acquisition module is used for acquiring the inclination angle information of the human body key part of the target person according to the position information of the human body contour key point of the human body key part;
and the identification module is used for determining, if the inclination angle information of the human body key part of the target person is within the preset activity angle range, the display position of the first target person identifier in the next video frame according to the target person identification information of the first video frame, and for displaying the first target person identifier in the next video frame.
9. An electronic device, comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the live video person tracking method as claimed in any of claims 1 to 7.
10. A computer readable storage medium having stored thereon a computer program, which when executed by a processor implements a live video person tracking method as claimed in any of claims 1 to 7.
CN202210150699.4A 2022-02-18 2022-02-18 Live video character tracking method, device, equipment and storage medium Active CN114466218B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210150699.4A CN114466218B (en) 2022-02-18 2022-02-18 Live video character tracking method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114466218A CN114466218A (en) 2022-05-10
CN114466218B (en) 2024-04-23

Family

ID=81414610

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210150699.4A Active CN114466218B (en) 2022-02-18 2022-02-18 Live video character tracking method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114466218B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103390282A (en) * 2013-07-30 2013-11-13 百度在线网络技术(北京)有限公司 Image tagging method and device
CN109671142A (en) * 2018-11-23 2019-04-23 南京图玩智能科技有限公司 A kind of intelligence makeups method and intelligent makeups mirror
CN109740513A (en) * 2018-12-29 2019-05-10 青岛小鸟看看科技有限公司 A kind of analysis of operative action method and apparatus
CN109800685A (en) * 2018-12-29 2019-05-24 上海依图网络科技有限公司 The determination method and device of object in a kind of video
CN110536151A (en) * 2019-09-11 2019-12-03 广州华多网络科技有限公司 The synthetic method and device of virtual present special efficacy, live broadcast system
CN110852254A (en) * 2019-11-08 2020-02-28 杭州网易云音乐科技有限公司 Face key point tracking method, medium, device and computing equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10586102B2 (en) * 2015-08-18 2020-03-10 Qualcomm Incorporated Systems and methods for object tracking

Also Published As

Publication number Publication date
CN114466218A (en) 2022-05-10

Similar Documents

Publication Publication Date Title
US11711668B2 (en) Localization determination for mixed reality systems
CN111314724A (en) Cloud game live broadcasting method and device
WO2022016915A1 (en) Advertisement information positioning method and corresponding apparatus therefor, advertisement information display method and corresponding apparatus therefor, device, and medium
US20220191557A1 (en) Method for displaying interaction data and electronic device
CN112969093B (en) Interactive service processing method, device, equipment and storage medium
CN112437318A (en) Content display method, device and system and storage medium
CN114387400A (en) Three-dimensional scene display method, display device, electronic equipment and server
CN114666671B (en) Live broadcast praise interaction method, device, equipment and storage medium
CN113490006A (en) Live broadcast interaction method and equipment based on bullet screen
US11694383B2 (en) Edge data network for providing three-dimensional character image to user equipment and method for operating the same
CN114374853A (en) Content display method and device, computer equipment and storage medium
CN111343409B (en) Method and system for initiating and synchronizing dynamic arrangement of multiple video windows
CN114466218B (en) Live video character tracking method, device, equipment and storage medium
CN110189364B (en) Method and device for generating information, and target tracking method and device
CN112612780A (en) Database operation method and device
CN114679591A (en) Video proportion switching method, device and medium for live broadcast room and computer equipment
CN115643445A (en) Interaction processing method and device, electronic equipment and storage medium
CN116366961A (en) Video conference method and device and computer equipment
CN113938698A (en) Display control method and device for live user data and computer equipment
KR20220125536A (en) System and operating method for providing mutual interaction service between virtual reality users and augmented reality users
CN112539752A (en) Indoor positioning method and indoor positioning device
JP6839771B2 (en) Video correction method and system by correction pattern analysis
CN112732384B (en) Data processing method and device
CN114816622B (en) Scene picture display method and device, electronic equipment and storage medium
CN111586261B (en) Target video processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant