CN117097932A - Video picture sharing method and device, electronic equipment and storage medium


Info

Publication number
CN117097932A
Authority
CN
China
Prior art keywords
terminal
target
video
video image
video picture
Prior art date
Legal status
Pending
Application number
CN202310468639.1A
Other languages
Chinese (zh)
Inventor
郭东升
符修源
陈至钊
Current Assignee
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd filed Critical Shenzhen Sensetime Technology Co Ltd
Priority to CN202310468639.1A
Publication of CN117097932A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N21/43076 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen of the same content streams on multiple devices, e.g. when family members are watching the same movie on different devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/21805 Source of audio or video content, e.g. local disk arrays enabling multiple viewpoints, e.g. using a plurality of cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508 Management of client data or end-user data
    • H04N21/4524 Management of client data or end-user data involving the geographical location of the client
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/816 Monomedia components thereof involving special video data, e.g. 3D video

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The disclosure relates to a video picture sharing method and device, an electronic device and a storage medium. The method includes: in response to an instruction to start a video picture sharing function, acquiring position information of at least one second terminal from a cloud server, where the second terminal is a terminal that is acquiring and sharing video images containing a target object; displaying an AR video picture containing an AR object according to the position information of the second terminal and a first video image acquired by the first terminal in real time, where the AR object includes a position identifier for representing the relative position between the at least one second terminal and the target object; and in response to a selection operation on any position identifier in the AR object, acquiring from the cloud server a target video image acquired by the second terminal corresponding to the selected position identifier, and switching to display the target live-action video picture corresponding to the target video image. The embodiments of the disclosure can realize video picture sharing and improve the user's experience of viewing the target object.

Description

Video picture sharing method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the field of computer technology, and in particular, to a video picture sharing method and device, an electronic device and a storage medium.
Background
In some offline spectator scenarios with large crowds, such as ball games and stage plays, some spectators may end up in distant positions, for example far from the stage or far from the court, which degrades their viewing experience.
Disclosure of Invention
The disclosure provides a technical solution for video picture sharing.
According to a first aspect of the present disclosure, there is provided a video picture sharing method applied to a first terminal, including: in response to an instruction to start a video picture sharing function, acquiring position information of at least one second terminal from a cloud server, where the second terminal is a terminal that is acquiring and sharing video images containing a target object; displaying an AR video picture containing an AR object according to the position information of the at least one second terminal and a first video image acquired by the first terminal in real time, where the AR object includes a position identifier for representing the relative position between the at least one second terminal and the target object; and in response to a selection operation on any position identifier in the AR object, acquiring, from the cloud server, a target video image acquired by the second terminal corresponding to the selected position identifier, and switching to display a target live-action video picture corresponding to the target video image.
In a possible implementation manner, the displaying an AR video picture containing an AR object according to the position information of the at least one second terminal and the first video image acquired by the first terminal in real time includes: generating the AR object according to the position information of the second terminal, and detecting an object area where the target object is located in a first live-action video picture corresponding to the first video image; and adding the AR object around the target object in the first live-action video picture according to the object area where the target object is located, to obtain and display the AR video picture.
In a possible implementation manner, before the acquiring position information of at least one second terminal from the cloud server in response to the instruction to start the video picture sharing function, the method further includes: in response to the instruction to start the video picture sharing function, acquiring the first video image acquired by the first terminal in real time; and transmitting the first video image to the cloud server, so that the cloud server feeds back the position information of the at least one second terminal to the first terminal when the first live-action video picture corresponding to the first video image contains the target object.
In one possible implementation, the method further includes: determining the position information of the first terminal in real time according to a preset frequency, and sending it to the cloud server, so that the cloud server pushes the position information of the first terminal to the at least one second terminal.
In one possible implementation manner, after the target live-action video picture corresponding to the target video image is displayed in a switching manner, the method further includes: in response to receiving a sharing stopping instruction sent by the cloud server, stopping displaying the target live-action video picture, and acquiring the position information of at least one second terminal from the cloud server again to generate an AR video picture and display the AR video picture; the sharing stopping instruction is used for representing that the target object is not contained in the target live-action video picture corresponding to the target video image currently, or that the second terminal corresponding to the selected position identifier stops transmitting the target video image to the cloud server.
According to a second aspect of the present disclosure, there is provided a video picture sharing method applied to a cloud server, including: in response to receiving a first video image transmitted by a first terminal that has started the video picture sharing function, detecting whether a first live-action video picture corresponding to the first video image contains a target object; sending the position information of at least one second terminal to the first terminal when the first live-action video picture contains the target object, so that the first terminal feeds back a selected second terminal to the cloud server according to the position information of the at least one second terminal, where the second terminal is a terminal that is acquiring and sharing video images containing the target object; and in response to receiving the selected second terminal fed back by the first terminal, transmitting the target video image acquired by the selected second terminal to the first terminal.
In one possible implementation, the method further includes: receiving the position information of the first terminal sent by the first terminal; and pushing the position information of the first terminal to the at least one second terminal under the condition that the target object is contained in the first live-action video picture, so that the at least one second terminal updates the position identification on the respectively displayed AR object according to the position information of the first terminal.
In one possible implementation, the method further includes: receiving a second video image transmitted by the at least one second terminal in real time and position information of the at least one second terminal, and respectively detecting whether a second live-action video picture corresponding to the second video image contains the target object or not, wherein the second video image comprises a target video image acquired by the selected second terminal; and stopping transmitting the target video image to the first terminal under the condition that the target object is not contained in the target live-action video picture corresponding to the target video image currently or the selected second terminal stops transmitting the target video image, and sending a sharing stopping instruction to the first terminal so as to instruct the first terminal to stop displaying the target live-action video picture corresponding to the target video image.
According to a third aspect of the present disclosure, there is provided a video picture sharing apparatus applied to a first terminal, including: an information acquisition module, used for acquiring, in response to an instruction to start a video picture sharing function, position information of at least one second terminal from the cloud server, where the second terminal is a terminal that is acquiring and sharing video images containing a target object; a display module, used for displaying an AR video picture containing an AR object according to the position information of the at least one second terminal and the first video image acquired by the first terminal in real time, where the AR object includes a position identifier for representing the relative position between the at least one second terminal and the target object; and a switching display module, used for acquiring, in response to a selection operation on any position identifier in the AR object, a target video image acquired by the second terminal corresponding to the selected position identifier from the cloud server, and switching to display a target live-action video picture corresponding to the target video image.
In a possible implementation manner, the displaying an AR video picture containing an AR object according to the position information of the at least one second terminal and the first video image acquired by the first terminal in real time includes: generating the AR object according to the position information of the second terminal, and detecting an object area where the target object is located in a first live-action video picture corresponding to the first video image; and adding the AR object around the target object in the first live-action video picture according to the object area where the target object is located, to obtain and display the AR video picture.
In a possible implementation manner, the apparatus further includes: a first video image acquisition module, used for acquiring, in response to the instruction to start the video picture sharing function, the first video image acquired by the first terminal in real time; and a first video image transmission module, used for transmitting the first video image to the cloud server, so that the cloud server feeds back the position information of the at least one second terminal to the first terminal when the first live-action video picture corresponding to the first video image contains the target object.
In one possible implementation, the apparatus further includes: the position information determining module is used for determining the position information of the first terminal in real time according to the preset frequency and sending the position information of the first terminal to the cloud server so that the cloud server pushes the position information of the first terminal to the at least one second terminal.
In one possible implementation manner, the apparatus further includes: a sharing stop instruction receiving module, used for stopping displaying the target live-action video picture in response to receiving the sharing stop instruction sent by the cloud server after the target live-action video picture corresponding to the target video image is displayed by switching, and acquiring the position information of at least one second terminal from the cloud server again to generate and display an AR video picture; the sharing stop instruction indicates that the target live-action video picture corresponding to the target video image does not currently contain the target object, or that the second terminal corresponding to the selected position identifier has stopped transmitting the target video image to the cloud server.
According to a fourth aspect of the present disclosure, there is provided a video picture sharing apparatus applied to a cloud server, including: a receiving module, used for detecting, in response to receiving a first video image transmitted by a first terminal that has started the video picture sharing function, whether a first live-action video picture corresponding to the first video image contains a target object; an information sending module, used for sending the position information of at least one second terminal to the first terminal when the first live-action video picture contains the target object, so that the first terminal feeds back the selected second terminal to the cloud server according to the position information of the at least one second terminal, where the second terminal is a terminal that is acquiring and sharing video images containing the target object; and a transmission module, used for transmitting, in response to receiving the selected second terminal fed back by the first terminal, the target video image acquired by the selected second terminal to the first terminal.
In one possible implementation, the apparatus further includes: the receiving module is used for receiving the position information of the first terminal sent by the first terminal; the pushing module is used for pushing the position information of the first terminal to the at least one second terminal under the condition that the first live-action video picture contains the target object, so that the at least one second terminal updates the position identification on the respectively displayed AR object according to the position information of the first terminal.
In one possible implementation, the apparatus further includes: the detection module is used for receiving a second video image transmitted by the at least one second terminal in real time and the position information of the at least one second terminal, and respectively detecting whether a second live-action video picture corresponding to the second video image contains the target object or not, wherein the second video image comprises the target video image acquired by the selected second terminal; and the transmission stopping module is used for stopping transmitting the target video image to the first terminal and sending a sharing stopping instruction to the first terminal so as to instruct the first terminal to stop displaying the target live-action video picture corresponding to the target video image under the condition that the target object is not included in the target live-action video picture corresponding to the target video image currently or the selected second terminal stops transmitting the target video image.
According to a fifth aspect of the present disclosure, there is provided an electronic device comprising: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the instructions stored in the memory to perform the above method.
According to a sixth aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the embodiments of the disclosure, after a user starts the video picture sharing function, the position information of second terminals that are acquiring and sharing video images containing a target object (such as a stage or a court) is acquired from the cloud server. An AR video picture containing an AR object is then displayed based on this position information and the first video image acquired in real time by the first terminal, where the AR object includes position identifiers representing the relative positions between the at least one second terminal and the target object, so that the user can conveniently view, in real time, the position of each second terminal relative to the target object. Selecting any position identifier is then equivalent to selecting the corresponding second terminal: the target video image acquired by the selected second terminal is obtained from the cloud server, and the target live-action video picture corresponding to that target video image is displayed.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the technical aspects of the disclosure.
Fig. 1 illustrates an application scenario diagram of a video picture sharing method according to an embodiment of the present disclosure.
Fig. 2 shows a flowchart of a video picture sharing method according to an embodiment of the present disclosure.
Fig. 3 shows a flowchart of a video picture sharing method according to an embodiment of the present disclosure.
Fig. 4 shows a block diagram of a video picture sharing apparatus according to an embodiment of the present disclosure.
Fig. 5 shows a block diagram of a video picture sharing apparatus according to an embodiment of the present disclosure.
Fig. 6 illustrates a block diagram of an electronic device 1900 according to an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" is herein merely an association relationship describing an associated object, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
It should be understood that the terms "first," "second," and the like, as used in this disclosure, are used to distinguish between different objects and are not used to describe a particular order. The terms "comprises" and "comprising" when used in this disclosure are taken to specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Furthermore, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
Fig. 1 shows a schematic view of an application scenario of a video picture sharing method according to an embodiment of the present disclosure. As shown in fig. 1, in this application scenario, a, b, and c respectively represent terminals that have started the video picture sharing function and are sharing video images containing the target object. The video picture sharing process corresponding to the video picture sharing method of the embodiment of the present disclosure is briefly described below, taking terminal a viewing a live-action video picture shot by another terminal as an example:
First, after terminal a starts the video picture sharing function, it begins to acquire video images and determine its position information in real time, and uploads both to the cloud server. The cloud server detects whether the video picture corresponding to the video image uploaded by terminal a contains the target object; if so, it can send the position information of terminal b and terminal c to terminal a, where the position information represents the actual position of a terminal in the application scenario;
Second, after receiving the position information of terminal b and terminal c sent by the cloud server, terminal a can display an AR video picture containing an AR object according to that position information and the currently acquired video image, where the AR object contains the position identifiers corresponding to terminal b and terminal c, which represent the relative positions between each of those terminals and the target object. It should be understood that the embodiments of the present disclosure do not limit the shape, color, or size of the AR object and the position identifiers; for example, the AR object may be disc-shaped, fan-shaped, or ring-shaped, a position identifier may be drop-shaped or dot-shaped, and a position identifier may further include a user avatar;
Then, after terminal a displays the AR video picture, the user holding terminal a may tap any position identifier, which is equivalent to selecting the corresponding terminal that is sharing its video picture. For example, if the user selects terminal c, terminal a may send the device number corresponding to terminal c to the cloud server, the cloud server may transmit the video images acquired by the terminal c indicated by that device number to terminal a in real time, and terminal a may display the corresponding video picture based on the video images acquired by terminal c in real time. The device number may be used to distinguish different terminals and their location information.
It should be understood that, the above-mentioned terminal b and terminal c will also display the AR object, the user holding the terminal b may select to view the video picture captured by the terminal a or the terminal c through the displayed position identifier on the AR object, and the user holding the terminal c may also select to view the video picture captured by the terminal a or the terminal b through the displayed position identifier on the AR object.
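For concreteness, the following minimal sketch models the selection step described above as plain data messages. Every name here (field names, the device numbers "a" and "c") is an illustrative assumption, not part of the disclosure:

```python
from dataclasses import dataclass

@dataclass
class LocationUpdate:
    device_number: str   # distinguishes terminals and their location information
    latitude: float
    longitude: float

@dataclass
class SelectShareRequest:
    viewer_device_number: str    # e.g. terminal a, the watcher
    selected_device_number: str  # e.g. terminal c, chosen via its position identifier

# Terminal a taps terminal c's position identifier on the AR object; this
# request asks the cloud server to relay terminal c's video images to terminal a.
request = SelectShareRequest(viewer_device_number="a", selected_device_number="c")
```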
It should be understood that the user may start the video picture sharing function through an application program that is installed on the terminal and has the video picture sharing function, that is, an application program applying the video picture sharing method of the present disclosure. The application program may include, for example, a social application, a camera application, an AR application, and the like, which is not limited in the embodiments of the present disclosure.
One skilled in the art may develop such an application program using software development techniques known in the art, and may further provide in it related controls for operations such as stopping watching video pictures shot by other terminals or stopping the video picture sharing function, so as to meet various operation requirements of users.
It should be noted that the above application scenario is just one scenario provided by the embodiments of the present disclosure. In fact, any user in the scenario may share the video pictures shot by their terminal through the application software with the video picture sharing function, or view the video pictures shot by any other terminal that has started the video picture sharing function. That is, any terminal in the video picture sharing method can act both as a sharer that shares the video pictures it shoots and as a viewer that watches video pictures shot by other terminals.
Fig. 2 shows a flowchart of a video picture sharing method according to an embodiment of the present disclosure. The method is applied to a first terminal, which may be a terminal device such as a user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, or a wearable device, and may be implemented by a processor of the first terminal invoking computer-readable instructions stored in a memory. As shown in fig. 2, the video picture sharing method includes:
In step S11, in response to an instruction to start the video frame sharing function, position information of at least one second terminal, which is a terminal that is capturing and sharing a video image including a target object, is acquired from the cloud server.
As described above, the user can start the video picture sharing function through an application program with this function installed on the first terminal. After the function is started, the position information of the other second terminals that are acquiring and sharing video images containing the target object can be obtained from the cloud server.
It should be understood that each second terminal may be provided with an image sensor such as a camera to acquire video images, i.e., to shoot the target object. A video image may be understood as the video data collected by the image sensor for transmission, and may be represented in the form of a video stream; a video picture may be understood as the continuous, naked-eye-visible rendering of multiple frames of video images. Those skilled in the art may use video processing techniques known in the art to convert, or decode, multiple frames of video images into a video picture and display it, which is not limited in the embodiments of the present disclosure.
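As a hedged illustration of the image/picture distinction just drawn, the sketch below uses OpenCV (one common choice, not a library named by the disclosure) to decode a stream of video images and render it as a visible video picture:

```python
import cv2  # OpenCV is an assumed, commonly used library for decoding and display

# Decode a video source (a camera index here; a stream URL would also work)
# frame by frame and render the frames as a continuous, visible video picture.
cap = cv2.VideoCapture(0)  # 0 = local camera
while cap.isOpened():
    ok, frame = cap.read()   # one frame of the video image stream
    if not ok:
        break
    cv2.imshow("live-action video picture", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to stop viewing
        break
cap.release()
cv2.destroyAllWindows()
```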
The location information may represent an actual spatial location of the terminal, for example, may be location information determined by a global positioning system (Global Positioning System, GPS) of the terminal, or may also be location information determined by performing visual positioning based on an acquired video image, where a determination manner of the location information is not limited in the embodiments of the present disclosure.
In one possible implementation manner, before the acquiring position information of at least one second terminal from the cloud server in response to the instruction to start the video picture sharing function, the method further includes: in response to the instruction to start the video picture sharing function, acquiring the first video image acquired by the first terminal in real time; and transmitting the first video image to the cloud server, so that the cloud server feeds back the position information of at least one second terminal to the first terminal when it detects that the first live-action video picture corresponding to the first video image contains the target object. In this way, the position information of the second terminals is returned only while the user is shooting the target object, the AR video picture can be displayed with the AR object following the target object, and the user interaction experience is improved.
After the first terminal starts the video picture sharing function, it can acquire the first video image in real time and upload it to the cloud server. The cloud server can run recognition and detection on the first video image acquired in real time to determine whether the first live-action video picture corresponding to the first video image contains the target object, that is, whether the first terminal is shooting the target object, and return the position information of at least one second terminal to the first terminal when it is.
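A minimal sketch of that cloud-side check follows. Here `detect_objects` stands in for whatever detector the operator deploys (the disclosure does not name one) and is assumed to return (label, confidence, bounding box) tuples:

```python
def frame_contains_target(frame, target_label, detect_objects, min_conf=0.5):
    """Return True if the detector finds the target object in this frame.

    `detect_objects` is an assumed callable: frame -> [(label, conf, bbox)].
    """
    return any(label == target_label and conf >= min_conf
               for label, conf, _bbox in detect_objects(frame))
```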
In one possible implementation manner, after the first terminal starts the video frame sharing function, the first terminal may further determine its own location information in real time, and upload the location information determined in real time to the cloud server, where the cloud server may further determine, according to the location information of the first terminal, whether the first terminal is within a specified geographic range (for example, within a theatre range, within a competition field range, etc.), and if the first terminal is within the specified geographic range, return the location information of at least one second terminal to the first terminal. Alternatively, when it is detected that the first terminal is capturing the target object (that is, when it is detected that the first live-action video frame corresponding to the first video image includes the target object) and the first terminal is within the specified geographic range, the location information of at least one second terminal may be returned to the first terminal.
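The "specified geographic range" test can be sketched as a plain distance check, assuming GPS coordinates and a circular venue region. The haversine formula below is standard; the venue centre and radius are illustrative values that would come from deployment configuration:

```python
import math

def within_venue(lat, lon, venue_lat, venue_lon, radius_m):
    """Haversine distance from the terminal to the venue centre, in metres."""
    earth_radius_m = 6371000.0
    phi1, phi2 = math.radians(lat), math.radians(venue_lat)
    dphi = math.radians(venue_lat - lat)
    dlmb = math.radians(venue_lon - lon)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * earth_radius_m * math.asin(math.sqrt(a)) <= radius_m

# Example: is a spectator within 200 m of an (assumed) stadium centre?
print(within_venue(22.5334, 113.9310, 22.5330, 113.9305, 200.0))  # True
```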
When the cloud server returns the position information of at least one second terminal to the first terminal, the device numbers corresponding to the second terminals can be returned at the same time, and the device numbers can be used for distinguishing different second terminals and the position information of the different second terminals, so that target video images acquired by the selected second terminals can be acquired from the cloud server conveniently.
As described above, after the first terminal starts the video picture sharing function, it may acquire the first video image and determine its own position information in real time, and upload both to the cloud server. There may, however, be cases where the first live-action video picture corresponding to the first video image uploaded by the first terminal does not contain the target object (i.e., the first terminal may not currently be shooting the target object), and/or the first terminal is not within the specified geographic range.
Based on this, in one possible implementation manner, if the cloud server detects that the first live-action video frame corresponding to the first video image acquired by the first terminal does not include the target object, and/or the first terminal is not within the specified geographic range, the cloud server may not send the position information of the at least one second terminal to the first terminal, or stop sending the position information of the at least one second terminal to the first terminal; the first terminal may display a first live-action video picture corresponding to the first video image currently collected by the first terminal when the cloud server does not return the position information of the at least one second terminal, until the cloud server detects that the first live-action video picture corresponding to the first video image collected by the first terminal already contains the target object, and/or returns the position information of the at least one second terminal to the first terminal when the first terminal is already within the specified geographical range.
In step S12, according to the position information of the at least one second terminal and the first video image acquired by the first terminal in real time, an AR video picture containing an AR object is displayed, where the AR object includes a position identifier for representing the relative position between the at least one second terminal and the target object.
In one possible implementation manner, displaying an AR video picture containing an AR object according to the position information of at least one second terminal and the first video image acquired by the first terminal in real time may include: generating the AR object according to the position information of the second terminal; and adding the AR object to the first live-action video picture corresponding to the first video image, to obtain and display the AR video picture. The AR object may be added at a specified image position in the first live-action video picture, for example the upper left corner or the lower right corner, or at a random image position, which is not limited in the embodiments of the present disclosure.
Considering that the photographing angle of view of the first terminal may be changed, or that the image position of the target object in the first live-action video picture is changed, if the AR object is displayed at a specified image position or a random image position in the first live-action video picture, there may be a case where the target object is blocked. In one possible implementation manner, displaying an AR video frame including an AR object according to the location information of at least one second terminal and the first video image acquired by the first terminal in real time includes: generating an AR object according to the position information of the second terminal, and detecting an object area where a target object is located in a first live-action video picture corresponding to a first video image; according to the object area where the target object is located, adding AR objects around the target object in the first live-action video picture, obtaining an AR video picture and displaying the AR video picture. By the method, the AR object can be displayed around the target object according to the image position of the target object, or the AR object is displayed along with the target object, so that better AR interaction experience is obtained.
As described above, the AR object may be, for example, a disc shape, a sector shape, a ring shape, or the like; the location identifier may be a water drop, a dot, etc., and the location identifier may further include a user avatar, etc. It should be understood that the embodiments of the present disclosure are not limited to the shape, color, size, etc. of the AR object and the location identifier. The AR object can also display a position identifier representing the relative position between the first terminal and the target object, and can also display an object identifier corresponding to the target object, so that a user can conveniently and intuitively check the relative position of the AR object relative to the target object, and can conveniently and intuitively check the relative positions of other terminals relative to the target object or relative to the first terminal.
The AR object is generated according to the position information of the second terminal, which can be understood as that the AR object including the position identifier is drawn according to the actual spatial position of the second terminal and the actual spatial position of the target object; it should be understood that the actual spatial position of the target object may be known, after the actual spatial position of the target object and the actual spatial position of each second terminal are known, the relative position of each second terminal with respect to the target object may be determined, then the display position of the position identifier corresponding to each second terminal on the AR object may be determined based on the relative position, and the position identifier may be added to the AR object according to the display position of each position identifier, so as to obtain the AR object.
The object identifier for indicating the target object may be fixedly set at a default position of the AR object, for example, may be set at a middle position, a top position, a bottom position, and the like, and based on the default position of the object identifier, other position identifiers may determine a display position on the AR object according to a relative position of each second terminal with respect to the target object, and add the position identifier to the AR object according to a display position of each position identifier, so that the position identifier may represent a relative position of each second terminal with respect to the target object. It should be appreciated that the addition of the location identity of the first terminal to the AR object may be implemented with reference to the above-described implementation of adding the location identity of the second terminal to the AR object.
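One way to realize this mapping, sketched under the assumption of a ring-shaped AR object and planar venue coordinates, is to place each position identifier at the angle of the terminal's bearing around the target object:

```python
import math

def identifier_display_position(terminal_xy, target_xy, ring_center, ring_radius):
    """Place a position identifier on a ring-shaped AR object.

    The angle is the direction of the terminal as seen from the target object,
    so identifiers preserve the terminals' relative directions around it.
    """
    dx = terminal_xy[0] - target_xy[0]
    dy = terminal_xy[1] - target_xy[1]
    bearing = math.atan2(dy, dx)
    return (ring_center[0] + ring_radius * math.cos(bearing),
            ring_center[1] + ring_radius * math.sin(bearing))

# Example: a terminal due east of the stage lands on the ring's right edge.
print(identifier_display_position((10.0, 0.0), (0.0, 0.0), (160.0, 120.0), 50.0))
```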
One skilled in the art may use a target detection technology known in the art to detect the object area where the target object is located in the first live-action video picture corresponding to the first video image, which is not limited in the embodiments of the disclosure. After the object area where the target object is located is detected, the AR object may be added around the target object in the first live-action video picture, so as to obtain and display the AR video picture. Here, "around the target object" may mean, for example, an image position on the left, right, upper, or lower side of the target object, which is not limited in the embodiments of the present disclosure.
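The placement "around the target object" can be sketched as follows, assuming the detector returns a pixel-space bounding box (x, y, w, h) for the object area:

```python
def ar_object_anchor(bbox, frame_width, margin=20):
    """Choose a top-left anchor for the AR object beside the target's bounding box.

    Prefers the right side of the target; falls back to the left side when the
    overlay would otherwise run off the edge of the video picture.
    """
    x, y, w, _h = bbox
    if x + w + margin < frame_width:
        return (x + w + margin, y)
    return (max(0, x - margin), y)
```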
It should be understood that the actual spatial positions of the second terminals and the first terminals may change at any time, so that each of the second terminals and the first terminals may determine its own position information according to a preset frequency (for example, once every 1 minute) and upload the position information to the cloud server, the cloud server may push the position information uploaded by each of the second terminals to the first terminal in real time, and the first terminal may update the position identifier in the AR object according to the position information pushed by the cloud server in real time and the position information determined by itself in real time; and the cloud server can push the position information uploaded by the first terminal to each second terminal so as to update the position identification displayed on the AR object in each second terminal.
In one possible implementation manner, the first terminal may determine the location information of the first terminal in real time according to a preset frequency and send the location information of the first terminal to the cloud server, so that the cloud server pushes the location information of the first terminal to at least one second terminal. It should be understood that, each second terminal may also determine, in real time, the location information of the second terminal itself according to a preset frequency, and send the location information of the second terminal itself to the cloud server, so that the cloud server pushes the location information of the second terminal to the first terminal, so as to update, in real time, the location identifier on the AR object displayed in the first terminal. By the method, the AR objects displayed in each terminal with the video picture starting function can be updated in real time, so that a user can conveniently select to watch the live-action video picture currently shot by any other terminal.
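A sketch of that preset-frequency reporting loop is given below (once per minute, matching the example above); `get_location` and `upload` are assumed hooks into the terminal's positioning and networking layers:

```python
import time

def report_location(device_number, get_location, upload, period_s=60.0,
                    keep_running=lambda: True):
    """Upload this terminal's position to the cloud server at a preset frequency."""
    while keep_running():
        lat, lon = get_location()  # GPS or visual positioning, per the text above
        upload({"device_number": device_number, "lat": lat, "lon": lon})
        time.sleep(period_s)
```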
In step S13, in response to a selection operation on any position identifier in the AR object, a target video image acquired by the second terminal corresponding to the selected position identifier is acquired from the cloud server, and the display is switched to the target live-action video picture corresponding to the target video image.
It should be understood that when the user selects any one of the location identifiers, this means that the user desires to view the live-action video picture taken by the second terminal corresponding to the location identifier, and the second terminal corresponding to the selected location identifier, that is, the selected second terminal. As described above, when the cloud server returns the location information of at least one second terminal to the first terminal, the device numbers corresponding to the second terminals may also be returned at the same time, where the device numbers may be used to distinguish different second terminals and location information of different second terminals.
Based on the above, after the user selects any position identifier, the device number of the second terminal corresponding to the selected position identifier may be sent to the cloud server, and the cloud server may transmit the target video image acquired by the second terminal corresponding to the device number to the first terminal based on the device number sent by the first terminal, that is, the target video image acquired by the second terminal corresponding to the selected position identifier is acquired from the cloud server.
As described in step S12 above, before the user selects any position identifier, the display screen of the first terminal displays the AR video picture; after the user selects a position identifier, the display can switch to the target live-action video picture corresponding to the target video image obtained from the cloud server, so that the user views the target live-action video picture shot by the selected second terminal. In one possible implementation, a prompt may be displayed in the interface showing the target live-action video picture, to remind the user holding the first terminal that they are currently viewing the target live-action video picture shot by the selected second terminal.
It should be understood that once the first terminal has switched to displaying the target live-action video picture, the user will most likely not be shooting the target object while watching it, that is, the first live-action video picture corresponding to the first video image acquired by the first terminal will not contain the target object. The first terminal can therefore pause acquiring the first video image and determining its own position information after the switch, and pause transmitting them to the cloud server, which can be understood as the first terminal pausing its video picture sharing function.
After the first terminal has switched to the target live-action video picture, the user can also send a stop instruction for stopping watching it through a virtual key provided in the application program or a physical key of the first terminal, and the first terminal can, in response to the stop instruction, stop acquiring the target video image from the cloud server and stop displaying the target live-action video picture.
In one possible implementation manner, after the user sends a stop instruction to stop watching the target live-action video frame through the virtual key or the physical key, a prompt popup window may be displayed in the interface of the first terminal, where the prompt popup window may be used to ask the user whether to continue to start the video frame sharing function, and if the user chooses not to continue to start the video frame sharing function, the steps of the video frame sharing method in the embodiment of the disclosure are not executed, that is, the video frame sharing function is exited.
If the user selects to continue to start the video frame sharing function, the first terminal may restart to collect the first video image, determine its own position information, upload the position information to the cloud server, and acquire the position information of at least one second terminal from the cloud server according to the steps S11 to S12, so as to redisplay the AR video frame. Of course, the user may continue to select any location identifier to view the live-action video frames shot by other second terminals through step S13, or may not select any location identifier and display the AR video frames, which is not limited in the embodiment of the present disclosure.
In some cases, the cloud server may further send a sharing stop instruction to the first terminal to notify the first terminal to stop displaying the target live-action video frame, for example, the live-action video frame corresponding to the target video image acquired by the selected second terminal does not include the target object, that is, the selected second terminal is not shooting the target object; or the selected second terminal has stopped transmitting the target video image to the cloud server, that is, the second terminal has stopped the video image sharing function, for example, the second terminal closes an application program with the video sharing function, the second terminal has selected to switch and display the live-action video images shot by other terminals, and the like. It can be understood that in these cases, the cloud server does not need to transmit the target video image acquired by the second terminal to the first terminal any more, so that the sharing stop instruction can be sent to the first terminal, and the first terminal can respond to the sharing stop instruction to perform corresponding processing.
In one possible implementation manner, after switching to display the target live-action video picture corresponding to the target video image, the method further includes: in response to receiving a sharing stop instruction sent by the cloud server, stopping displaying the target live-action video picture, and acquiring the position information of at least one second terminal from the cloud server again to generate and display an AR video picture, where the sharing stop instruction indicates that the target live-action video picture corresponding to the target video image does not currently contain the target object, or that the second terminal corresponding to the selected position identifier has stopped transmitting the target video image to the cloud server. In this way, the cases where the selected second terminal is no longer shooting the target object, or has stopped transmitting the target video image to the cloud server, can be handled effectively, improving the interaction experience.
After receiving the sharing stop instruction sent by the cloud server, the first terminal may display a prompt pop-up window to prompt the user holding the first terminal that the second terminal does not shoot the target object any more, or prompt the second terminal to stop the video image sharing function, where the prompt pop-up window may also ask the user whether to continue to start the video image sharing function, and if the user chooses to continue to start the video image sharing function, the AR video image may be redisplayed according to the steps S11 to S13, and the live-action video images shot by other second terminals may be redisplayed. Of course, the user may not be asked to directly redisplay the AR video frames according to the steps S11 to S13, and choose to watch the live-action video frames shot by the other second terminals again, which is not limited in the embodiments of the present disclosure.
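The first terminal's handling of the sharing stop instruction can be sketched as a small state transition; the message shape and state fields are illustrative assumptions:

```python
def handle_server_message(msg, state):
    """React to a (hypothetical) stop-sharing message from the cloud server."""
    if msg.get("type") == "stop_sharing":
        state["mode"] = "ar_picture"            # leave the target live-action picture
        state["selected_device"] = None         # forget the selected second terminal
        state["refresh_peer_locations"] = True  # re-fetch second-terminal positions
    return state

state = {"mode": "remote_picture", "selected_device": "c"}
state = handle_server_message({"type": "stop_sharing"}, state)
print(state)  # back to the AR picture, pending a location refresh
```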
In the embodiments of the disclosure, after a user starts the video picture sharing function, the position information of second terminals that are acquiring and sharing video images containing a target object (such as a stage or a court) is acquired from the cloud server. An AR video picture containing an AR object is then displayed based on this position information and the first video image acquired in real time by the first terminal, where the AR object includes position identifiers representing the relative positions between the at least one second terminal and the target object, so that the user can conveniently view, in real time, the position of each second terminal relative to the target object. Selecting any position identifier is then equivalent to selecting the corresponding second terminal: the target video image acquired by the selected second terminal is obtained from the cloud server, and the target live-action video picture corresponding to that target video image is displayed.
Fig. 3 shows a flowchart of a video picture sharing method according to an embodiment of the present disclosure, which is applied to a cloud server. As shown in fig. 3, the video picture sharing method includes:
in step S21, in response to receiving the first video image transmitted by the first terminal that has turned on the video picture sharing function, it is detected whether the first live-action video picture corresponding to the first video image includes the target object.
Here, image processing techniques known in the art, such as object detection and image recognition, may be used to detect whether the first live-action video picture contains the target object, which is not limited in the embodiments of the present disclosure.
In step S22, if the first live-action video frame includes the target object, the location information of at least one second terminal is sent to the first terminal, so that the first terminal feeds back the selected second terminal to the cloud server according to the location information of the at least one second terminal.
As described above, after the first terminal receives the position information of at least one second terminal sent by the cloud server, it may display an AR video picture containing an AR object according to that position information and the first video image acquired in real time. The user holding the first terminal may select any position identifier, that is, select any second terminal, where a second terminal is a terminal that is acquiring and sharing video images containing the target object; the selected second terminal may be understood as the second terminal whose shot live-action video picture the user holding the first terminal desires to view.
As described above, after the user holding the first terminal selects any position identifier, that is, after any second terminal is selected, the first terminal may send the device number of the selected second terminal to the cloud server, so as to implement feedback of the selected second terminal to the cloud server.
It should be understood that the first live-action video picture corresponding to the first video image acquired by the first terminal may or may not include the target object; under the condition that the first video image is detected to not contain the target object, the cloud server can not send the position information of at least one second terminal to the first terminal or stop sending the position information of at least one second terminal to the first terminal until the first live-action video image is detected to contain the target object.
When the cloud server returns the position information of the at least one second terminal to the first terminal, it may also return the device number corresponding to each second terminal; the device numbers can be used to distinguish different second terminals and their respective position information.
In step S23, in response to receiving the selected second terminal fed back by the first terminal, the target video image acquired by the selected second terminal is transmitted to the first terminal.
As described above, when the cloud server returns the position information of the at least one second terminal to the first terminal, it may also return the device number corresponding to each second terminal, the device numbers being usable to distinguish different second terminals and their position information. The first terminal may send the device number of the selected second terminal to the cloud server, thereby feeding back the selection. Based on the device number fed back by the first terminal, the cloud server may then transmit the target video image acquired by the second terminal corresponding to that device number to the first terminal, which realizes the transmission of the target video image acquired by the selected second terminal.
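The following Python sketch illustrates one way the device-number bookkeeping of steps S22 and S23 could look on the cloud server; the SharingRegistry class, its method names, and the payload shape are illustrative assumptions, not part of the disclosure.

```python
# A sketch of steps S22/S23, assuming the cloud server keeps a registry that
# maps each sharing second terminal's device number to its location and its
# latest shared video image. All names here are illustrative.
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class SecondTerminal:
    device_number: str
    location: tuple                       # e.g. (latitude, longitude)
    latest_frame: Optional[bytes] = None  # most recent shared video image

class SharingRegistry:
    def __init__(self) -> None:
        self._terminals: Dict[str, SecondTerminal] = {}

    def register(self, term: SecondTerminal) -> None:
        self._terminals[term.device_number] = term

    def locations_payload(self) -> list:
        # Step S22: device numbers are returned together with the position
        # information so the first terminal can tell second terminals apart.
        return [{"device_number": t.device_number, "location": t.location}
                for t in self._terminals.values()]

    def frame_for(self, device_number: str) -> Optional[bytes]:
        # Step S23: the first terminal feeds back a device number and the
        # server forwards that terminal's latest target video image.
        term = self._terminals.get(device_number)
        return term.latest_frame if term else None

registry = SharingRegistry()
registry.register(SecondTerminal("dev-07", (31.2310, 121.4700), b"<frame>"))
print(registry.locations_payload())   # sent to the first terminal in step S22
print(registry.frame_for("dev-07"))   # forwarded in step S23
```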
In the embodiment of the disclosure, the cloud server pushes, to any first terminal that has started the video picture sharing function and is shooting the target object, the position information of the other second terminals that are acquiring and sharing video images containing the target object. A user holding the first terminal can thus select any second terminal of interest based on that position information; the cloud server transmits the target video image acquired by the selected second terminal to the first terminal, and the first terminal switches to displaying the target live-action video picture corresponding to that image. Video picture sharing among terminals is thereby realized: any terminal can act as a sharer that shares the live-action video picture it shoots, and also as a viewer that watches live-action video pictures shot by other terminals, which helps improve the experience of watching the target object.
As described above, the first terminal may further determine its own position information in real time and upload it to the cloud server. In one possible implementation manner, the method further includes: receiving the position information of the first terminal sent by the first terminal; and pushing the position information of the first terminal to the at least one second terminal when the first live-action video picture contains the target object, so that the at least one second terminal updates the position identifier on the AR object it displays according to the position information of the first terminal. In this way, video picture sharing between the terminals is facilitated.
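A minimal sketch of this push, assuming the cloud server exposes some send_to(device_number, message) transport; both the transport and the message shape are assumptions for illustration.

```python
# Push the first terminal's location only while it is shooting the target,
# so each second terminal can update the position identifier on its AR object.
def push_first_terminal_location(first_loc, second_device_numbers, send_to,
                                 contains_target: bool) -> None:
    if not contains_target:
        return  # the first terminal is not shooting the target object
    for dev in second_device_numbers:
        send_to(dev, {"type": "peer_location", "location": first_loc})

push_first_terminal_location(
    (31.2300, 121.4700), ["dev-07", "dev-12"],
    send_to=lambda dev, msg: print(dev, msg),  # stand-in transport
    contains_target=True,
)
```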
As described above, the first live-action video picture corresponding to the first video image uploaded after the first terminal starts the video picture sharing function may not contain the target object, that is, the first terminal may not be shooting the target object. In that case, the position information of the first terminal is not pushed to the other second terminals that are acquiring and sharing video images containing the target object. If the first live-action video picture does contain the target object, that is, the first terminal is shooting the target object, the position information of the first terminal is pushed to the at least one second terminal.
It should be understood that each second terminal may display the AR object in the same manner as the first terminal displays the AR video picture. When the first terminal is shooting the target object, then, from the perspective of any second terminal, the first terminal is itself a terminal that is acquiring and sharing a video image containing the target object; the cloud server may therefore push the position information of the first terminal to each second terminal so that each updates the position identifier on the AR object it displays.
As described above, the first terminal may upload its own position information to the cloud server. In one possible implementation, the cloud server may further determine, according to the position information of the first terminal, whether the first terminal is within a specified geographic range (e.g., within a theatre or a competition venue), and return the position information of the at least one second terminal to the first terminal only if it is. Alternatively, the position information of the at least one second terminal may be returned when it is detected both that the first terminal is shooting the target object (that is, the first live-action video picture corresponding to the first video image contains the target object) and that the first terminal is within the specified geographic range.
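A sketch of such a geofence check, assuming the specified geographic range is modelled as a circle of a given radius around the venue; the haversine formulation and the radius are illustrative choices, not taken from the disclosure.

```python
import math

def within_geofence(loc, venue_center, radius_m: float = 500.0) -> bool:
    """True if loc=(lat, lon) lies within radius_m metres of venue_center."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*loc, *venue_center))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    # Haversine great-circle distance on a sphere of Earth's mean radius.
    a = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    distance_m = 2 * 6371000 * math.asin(math.sqrt(a))
    return distance_m <= radius_m

def should_return_locations(contains_target: bool, loc, venue_center) -> bool:
    # Both conditions from the paragraph above must hold.
    return contains_target and within_geofence(loc, venue_center)

print(should_return_locations(True, (31.2301, 121.4701), (31.2300, 121.4700)))
```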
In a possible implementation manner, when the cloud server detects that the first live-action video picture corresponding to the first video image acquired by the first terminal does not contain the target object, and/or that the first terminal is not within the specified geographic range, the cloud server may refrain from sending, or stop sending, the position information of the at least one second terminal to the first terminal.
As described above, two situations can arise: the target live-action video picture currently corresponding to the target video image acquired by the selected second terminal no longer contains the target object, that is, the selected second terminal is no longer shooting the target object; or the selected second terminal stops transmitting the target video image to the cloud server, that is, it has stopped the video picture sharing function, for example because it closed the application with the video sharing function or switched to displaying a live-action video picture shot by another terminal. In either case, the cloud server no longer needs to transmit the target video image acquired by the selected second terminal to the first terminal; it can instead send a sharing stop instruction to the first terminal, and the first terminal can respond to that instruction with corresponding processing. In one possible implementation, the method further includes:
Receiving a second video image transmitted by at least one second terminal in real time and position information of the at least one second terminal, and respectively detecting whether a second live-action video picture corresponding to the second video image contains a target object or not, wherein the second video image comprises a target video image acquired by a selected second terminal;
and stopping transmitting the target video image to the first terminal under the condition that the target object is not contained in the target live-action video picture corresponding to the target video image currently or the selected second terminal stops transmitting the target video image, and sending a sharing stopping instruction to the first terminal so as to instruct the first terminal to stop displaying the target live-action video picture corresponding to the target video image.
It should be understood that, after the video picture sharing function is started, each second terminal may acquire the second video image in real time, determine its own position information in real time, and upload both to the cloud server. The cloud server may then examine each uploaded second video image to detect whether the corresponding second live-action video picture contains the target object.
Whether the second live-action video picture corresponding to the at least one second video image contains the target object may be detected in the manner described above for detecting whether the first live-action video picture contains the target object; the embodiments of the present disclosure do not limit this.
In the embodiment of the disclosure, the situations in which the selected second terminal no longer shoots the target object, or stops transmitting the target video image to the cloud server, can be handled effectively, which improves the interactive experience.
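The server-side stop logic could look roughly like the following sketch, which stops forwarding and sends a stop instruction when the selected second terminal's picture no longer contains the target object or its uploads go stale; the staleness test and the message names are assumptions for illustration.

```python
import time

def relay_selected_stream(get_latest, frame_contains_target, send_to_first,
                          stale_after_s: float = 3.0) -> None:
    """Forward target video images until a stop condition is met."""
    while True:
        frame, received_at = get_latest()  # (bytes | None, unix timestamp)
        stopped = frame is None or (time.time() - received_at) > stale_after_s
        if stopped or not frame_contains_target(frame):
            # Instruct the first terminal to stop displaying the target
            # live-action video picture.
            send_to_first({"type": "stop_sharing"})
            return
        send_to_first({"type": "target_frame", "frame": frame})

# Demo with stubs: the sharer has already stopped uploading.
relay_selected_stream(
    get_latest=lambda: (None, time.time()),
    frame_contains_target=lambda f: True,
    send_to_first=lambda msg: print("to first terminal:", msg),
)
```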
In one possible implementation manner, the video picture sharing function of the embodiment of the present disclosure may be applied to sharing video pictures of a landscape. Specifically, when a viewer (corresponding to a first terminal) is shooting video of a landscape, the video picture sharing function may be turned on, for example by clicking an AR sharing button in an application, so that the live-action video picture being shot can be shared. At the same time, an AR disc (namely, an AR object) is displayed beside the landscape in the live-action video picture; the disc shows the other sharers (corresponding to second terminals) that are currently sharing video pictures, with the positions of the viewer and of the other sharers displayed according to their positions relative to the landscape. By selecting any sharer, the live-action video picture shot by that sharer can be switched to and displayed in real time.
According to the embodiment of the disclosure, the AR disc can be generated around the target object in the live-action video picture according to the position information of the sharers; a viewer can tap to select any sharer and watch, in real time, the live-action video picture shot by the selected sharer.
It will be appreciated that the above-mentioned method embodiments of the present disclosure may be combined with each other to form combined embodiments without departing from their principles and logic; for brevity, such combinations are not described in detail in the present disclosure. It will also be appreciated by those skilled in the art that, in the above methods of the embodiments, the specific order of execution of the steps should be determined by their function and possible inherent logic.
In addition, the disclosure further provides a video picture sharing apparatus, an electronic device, a computer-readable storage medium, and a program, each of which may be used to implement any video picture sharing method provided by the disclosure; for the corresponding technical solutions and descriptions, refer to the method sections, which are not repeated here.
Fig. 4 shows a block diagram of a video picture sharing apparatus according to an embodiment of the present disclosure, which is applied to a first terminal, as shown in fig. 4, the apparatus including:
An information obtaining module 401, configured to obtain, from a cloud server, location information of at least one second terminal in response to an instruction to start a video frame sharing function, where the second terminal is a terminal that is acquiring and sharing a video image including a target object;
the display module 402 is configured to display an AR video frame including an AR object according to the location information of the at least one second terminal and the first video image acquired by the first terminal in real time, where the AR object includes a location identifier for characterizing a relative location between the at least one second terminal and the target object;
and the switching display module 403 is configured to obtain, from the cloud server, a target video image acquired by the second terminal corresponding to the selected location identifier in response to a selection operation for any one of the location identifiers of the AR objects, and switch and display a target live-action video picture corresponding to the target video image.
In a possible implementation manner, the displaying an AR video picture containing an AR object according to the position information of the at least one second terminal and the first video image acquired by the first terminal in real time includes: generating the AR object according to the position information of the second terminal, and detecting the object area where the target object is located in the first live-action video picture corresponding to the first video image; and adding the AR object around the target object in the first live-action video picture according to the object area where the target object is located, to obtain and display the AR video picture.
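As a rough illustration of how the position identifiers might be laid out on such an AR object, the sketch below places each second terminal's marker on a circle around the detected object area, at the compass bearing from the target object to that terminal; this screen-space mapping is an illustrative simplification, not the disclosed rendering method.

```python
import math

def bearing_deg(frm, to) -> float:
    """Approximate compass bearing from frm=(lat, lon) to to=(lat, lon)."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*frm, *to))
    y = math.sin(lon2 - lon1) * math.cos(lat2)
    x = (math.cos(lat1) * math.sin(lat2)
         - math.sin(lat1) * math.cos(lat2) * math.cos(lon2 - lon1))
    return math.degrees(math.atan2(y, x)) % 360

def marker_positions(target_loc, object_box, terminals) -> dict:
    """object_box = (x1, y1, x2, y2) in screen pixels; terminals maps
    device_number -> (lat, lon). Returns device_number -> (px, py)."""
    cx = (object_box[0] + object_box[2]) / 2
    cy = (object_box[1] + object_box[3]) / 2
    radius = 0.6 * (object_box[2] - object_box[0])  # disc around the object
    out = {}
    for dev, loc in terminals.items():
        theta = math.radians(bearing_deg(target_loc, loc))
        # North maps to "above" the object on screen.
        out[dev] = (cx + radius * math.sin(theta), cy - radius * math.cos(theta))
    return out

print(marker_positions((31.2300, 121.4700), (200, 150, 600, 450),
                       {"dev-07": (31.2310, 121.4700)}))
```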
In a possible implementation manner, for the processing before the position information of the at least one second terminal is obtained from the cloud server in response to the start instruction of the video picture sharing function, the apparatus further includes: a first video image acquisition module, configured to acquire, in response to the start instruction of the video picture sharing function, the first video image acquired by the first terminal in real time; and a first video image transmission module, configured to transmit the first video image to the cloud server, so that the cloud server feeds back the position information of the at least one second terminal to the first terminal when the first live-action video picture corresponding to the first video image contains the target object.
In one possible implementation, the apparatus further includes: a position information determining module, configured to determine the position information of the first terminal in real time at a preset frequency and send it to the cloud server, so that the cloud server pushes the position information of the first terminal to the at least one second terminal.
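A minimal sketch of this preset-frequency upload, assuming placeholder get_location() and upload(loc) callables in place of whatever positioning and networking the terminal actually uses.

```python
import time

def report_location(get_location, upload, frequency_hz: float = 1.0,
                    iterations: int = 3) -> None:
    """Determine the first terminal's location in real time at a preset
    frequency and send it to the cloud server."""
    for _ in range(iterations):  # a real client would loop indefinitely
        upload(get_location())
        time.sleep(1.0 / frequency_hz)

report_location(get_location=lambda: (31.2300, 121.4700),
                upload=lambda loc: print("uploaded", loc))
```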
In one possible implementation manner, for the processing after the switching display of the target live-action video picture corresponding to the target video image, the apparatus further includes: a sharing stop instruction receiving module, configured to stop displaying the target live-action video picture in response to receiving a sharing stop instruction sent by the cloud server, and to obtain the position information of the at least one second terminal from the cloud server again to generate and display an AR video picture. The sharing stop instruction indicates that the target live-action video picture corresponding to the target video image no longer contains the target object, or that the second terminal corresponding to the selected position identifier has stopped transmitting the target video image to the cloud server.
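On the first-terminal side, handling the sharing stop instruction could be sketched as follows; the message format and the three callables are illustrative placeholders, not the disclosed interface.

```python
def on_server_message(msg, stop_display, fetch_locations, rebuild_ar_picture):
    """React to a sharing stop instruction from the cloud server."""
    if msg.get("type") == "stop_sharing":
        stop_display()                  # leave the shared target picture
        locations = fetch_locations()   # ask the cloud server again
        rebuild_ar_picture(locations)   # regenerate and show the AR picture

on_server_message({"type": "stop_sharing"},
                  stop_display=lambda: print("stopped target picture"),
                  fetch_locations=lambda: [{"device_number": "dev-07"}],
                  rebuild_ar_picture=lambda locs: print("AR rebuilt with", locs))
```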
According to the embodiment of the disclosure, after a user starts the video picture sharing function, the position information of each second terminal that is acquiring and sharing a video image containing a target object (such as a stage, a court, etc.) is obtained from the cloud server, and an AR video picture containing an AR object is displayed based on that position information and the first video image acquired in real time by the first terminal. The AR object includes a position identifier representing the relative position between the at least one second terminal and the target object, so the user can conveniently view each second terminal's position relative to the target object in real time. Selecting any position identifier amounts to selecting the corresponding second terminal; the target video image acquired by that terminal is obtained from the cloud server, and the corresponding target live-action video picture is switched to and displayed, so that the user can view the live-action video picture shot from a shooting angle of interest. Even if the first terminal is actually far away from the target object, the viewing experience can be improved by watching the live-action video pictures shared by other second terminals with better shooting angles.
Fig. 5 shows a block diagram of a video picture sharing apparatus according to an embodiment of the present disclosure, which is applied to a cloud server, as shown in fig. 5, the apparatus includes:
a receiving module 501, configured to detect, in response to receiving a first video image transmitted by a first terminal that has started a video frame sharing function, whether a first live-action video frame corresponding to the first video image includes a target object;
the information sending module 502 is configured to send, when the first live-action video frame includes the target object, location information of at least one second terminal to the first terminal, so that the first terminal feeds back, according to the location information of the at least one second terminal, the selected second terminal to the cloud server, where the second terminal is a terminal that is acquiring and sharing a video image including the target object;
and the transmission module 503 is configured to transmit, to the first terminal, the target video image acquired by the selected second terminal in response to receiving the selected second terminal fed back by the first terminal.
In one possible implementation, the apparatus further includes: the receiving module is used for receiving the position information of the first terminal sent by the first terminal; the pushing module is used for pushing the position information of the first terminal to the at least one second terminal under the condition that the first live-action video picture contains the target object, so that the at least one second terminal updates the position identification on the respectively displayed AR object according to the position information of the first terminal.
In one possible implementation, the apparatus further includes: the detection module is used for receiving a second video image transmitted by the at least one second terminal in real time and the position information of the at least one second terminal, and respectively detecting whether a second live-action video picture corresponding to the second video image contains the target object or not, wherein the second video image comprises the target video image acquired by the selected second terminal; and the transmission stopping module is used for stopping transmitting the target video image to the first terminal and sending a sharing stopping instruction to the first terminal so as to instruct the first terminal to stop displaying the target live-action video picture corresponding to the target video image under the condition that the target object is not included in the target live-action video picture corresponding to the target video image currently or the selected second terminal stops transmitting the target video image.
In the embodiment of the disclosure, the cloud server pushes, to any first terminal that has started the video picture sharing function and is shooting the target object, the position information of the other second terminals that are acquiring and sharing video images containing the target object. A user holding the first terminal can thus select any second terminal of interest based on that position information; the cloud server transmits the target video image acquired by the selected second terminal to the first terminal, and the first terminal switches to displaying the target live-action video picture corresponding to that image. Video picture sharing among terminals is thereby realized: any terminal can act as a sharer that shares the live-action video picture it shoots, and also as a viewer that watches live-action video pictures shot by other terminals, which helps improve the experience of watching the target object.
The method is specifically associated with the internal structure of a computer system and can solve technical problems of improving hardware operation efficiency or execution effect (including reducing the amount of data stored, reducing the amount of data transmitted, and increasing hardware processing speed), thereby achieving technical effects that improve the internal performance of the computer system in accordance with the laws of nature.
In some embodiments, functions or modules included in an apparatus provided by the embodiments of the present disclosure may be used to perform a method described in the foregoing method embodiments, and specific implementations thereof may refer to descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
The disclosed embodiments also provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method. The computer readable storage medium may be a volatile or nonvolatile computer readable storage medium.
The embodiment of the disclosure also provides an electronic device, which comprises: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the instructions stored in the memory to perform the above method.
Embodiments of the present disclosure also provide a computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which when run in a processor of an electronic device, performs the above method.
The electronic device may be provided as a terminal, cloud server, or other form of device.
Fig. 6 illustrates a block diagram of an electronic device 1900 according to an embodiment of the disclosure. For example, electronic device 1900 may be provided as a cloud server or terminal device. Referring to FIG. 6, electronic device 1900 includes a processing component 1922 that further includes one or more processors and memory resources represented by memory 1932 for storing instructions, such as application programs, that can be executed by processing component 1922. The application programs stored in memory 1932 may include one or more modules each corresponding to a set of instructions. Further, processing component 1922 is configured to execute instructions to perform the methods described above.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 1932, including computer program instructions executable by processing component 1922 of electronic device 1900 to perform the methods described above.
The present disclosure may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include the following: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., a light pulse through a fiber-optic cable), or an electrical signal transmitted through a wire.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for performing the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field-programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), with the state information of the computer readable program instructions, the electronic circuitry being able to execute the computer readable program instructions.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
The foregoing description of the various embodiments emphasizes the differences between them; for the parts that are the same or similar, the embodiments may refer to one another, and the details are not repeated herein for brevity.
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
If the technical solution of the application involves personal information, a product applying the technical solution clearly informs users of the personal information processing rules and obtains their individual consent before processing personal information. If the technical solution involves sensitive personal information, a product applying the technical solution obtains individual consent before processing such information and additionally meets the requirement of "explicit consent". For example, a clear and prominent sign is placed at a personal information collection device, such as a camera, to inform people that they are entering the personal information collection range and that personal information will be collected; if a person voluntarily enters the collection range, this is regarded as consent to collection. Alternatively, on a device that processes personal information, provided that the personal information processing rules are communicated through prominent signs or notices, personal authorization is obtained through pop-up messages or by asking the person to upload their personal information. The personal information processing rules may include information such as the identity of the personal information processor, the purposes of processing, the processing methods, and the types of personal information processed.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (12)

1. A video picture sharing method, applied to a first terminal, comprising:
responding to an opening instruction of a video picture sharing function, and acquiring position information of at least one second terminal from a cloud server, wherein the second terminal is a terminal which is used for acquiring video images containing target objects and sharing the video images;
displaying an AR video picture containing an AR object according to the position information of the at least one second terminal and the first video image acquired by the first terminal in real time, wherein the AR object comprises a position mark for representing the relative position between the at least one second terminal and the target object;
And responding to the selection operation of any position identifier in the AR object, acquiring a target video image acquired by a second terminal corresponding to the selected position identifier from the cloud server, and switching and displaying a target live-action video picture corresponding to the target video image.
2. The method according to claim 1, wherein displaying an AR video frame including an AR object based on the location information of the at least one second terminal and the first video image acquired by the first terminal in real time, comprises:
generating the AR object according to the position information of the second terminal, and detecting an object area where the target object is located in a first live-action video picture corresponding to the first video image;
and adding the AR objects around the target object in the first live-action video picture according to the object area where the target object is located, obtaining the AR video picture and displaying the AR video picture.
3. The method according to claim 1 or 2, wherein before the location information of the at least one second terminal is obtained from the cloud server in response to the start instruction of the video frame sharing function, the method further comprises:
Responding to an opening instruction of a video picture sharing function, and acquiring a first video image acquired by the first terminal in real time;
and transmitting the first video image to the cloud server, so that the cloud server feeds back the position information of the at least one second terminal to the first terminal under the condition that the first live-action video picture corresponding to the first video image contains the target object.
4. A method according to any one of claims 1 to 3, further comprising:
according to the preset frequency, the position information of the first terminal is determined in real time and sent to the cloud server, so that the cloud server pushes the position information of the first terminal to the at least one second terminal.
5. The method according to any one of claims 1 to 4, wherein after the switching display of the target live-action video picture corresponding to the target video image, the method further comprises:
in response to receiving a sharing stopping instruction sent by the cloud server, stopping displaying the target live-action video picture, and acquiring the position information of at least one second terminal from the cloud server again to generate an AR video picture and display the AR video picture;
The sharing stopping instruction is used for representing that the target object is not contained in the target live-action video picture corresponding to the target video image currently, or that the second terminal corresponding to the selected position identifier stops transmitting the target video image to the cloud server.
6. The video picture sharing method is characterized by being applied to a cloud server and comprising the following steps of:
in response to receiving a first video image transmitted by a first terminal with a video picture sharing function started, detecting whether a first live-action video picture corresponding to the first video image contains a target object or not;
transmitting the position information of at least one second terminal to the first terminal under the condition that the first live-action video picture contains the target object, so that the first terminal feeds back the selected second terminal to the cloud server according to the position information of the at least one second terminal, wherein the second terminal is a terminal which is used for collecting and sharing video images containing the target object;
and transmitting the target video image acquired by the selected second terminal to the first terminal in response to receiving the selected second terminal fed back by the first terminal.
7. The method of claim 6, wherein the method further comprises:
receiving the position information of the first terminal sent by the first terminal;
and pushing the position information of the first terminal to the at least one second terminal under the condition that the target object is contained in the first live-action video picture, so that the at least one second terminal updates the position identification on the respectively displayed AR object according to the position information of the first terminal.
8. The method according to claim 6 or 7, characterized in that the method further comprises:
receiving a second video image transmitted by the at least one second terminal in real time and position information of the at least one second terminal, and respectively detecting whether a second live-action video picture corresponding to the second video image contains the target object or not, wherein the second video image comprises a target video image acquired by the selected second terminal;
and stopping transmitting the target video image to the first terminal under the condition that the target object is not contained in the target live-action video picture corresponding to the target video image currently or the selected second terminal stops transmitting the target video image, and sending a sharing stopping instruction to the first terminal so as to instruct the first terminal to stop displaying the target live-action video picture corresponding to the target video image.
9. A video picture sharing apparatus, applied to a first terminal, comprising:
the information acquisition module is used for responding to an opening instruction of a video picture sharing function and acquiring position information of at least one second terminal from the cloud server, wherein the second terminal is a terminal which is used for acquiring video images containing target objects and sharing the video images;
the display module is used for displaying an AR video picture containing an AR object according to the position information of the at least one second terminal and the first video image acquired by the first terminal in real time, wherein the AR object comprises a position mark used for representing the relative position between the at least one second terminal and the target object;
and the switching display module is used for responding to the selection operation of any position identifier in the AR object, acquiring a target video image acquired by a second terminal corresponding to the selected position identifier from the cloud server, and switching and displaying a target live-action video picture corresponding to the target video image.
10. A video frame sharing device, applied to a cloud server, comprising:
the receiving module is used for responding to a first video image transmitted by a first terminal which has started a video picture sharing function, and detecting whether a first live-action video picture corresponding to the first video image contains a target object or not;
The information sending module is used for sending the position information of at least one second terminal to the first terminal under the condition that the first live-action video picture contains the target object, so that the first terminal feeds back the selected second terminal to the cloud server according to the position information of the at least one second terminal, and the second terminal is a terminal which is acquiring and sharing video images containing the target object;
and the transmission module is used for responding to the selected second terminal fed back by the first terminal and transmitting the target video image acquired by the selected second terminal to the first terminal.
11. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the instructions stored in the memory to perform the method of any of claims 1 to 8.
12. A computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the method of any of claims 1 to 8.
CN202310468639.1A 2023-04-23 2023-04-23 Video picture sharing method and device, electronic equipment and storage medium Pending CN117097932A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310468639.1A CN117097932A (en) 2023-04-23 2023-04-23 Video picture sharing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310468639.1A CN117097932A (en) 2023-04-23 2023-04-23 Video picture sharing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117097932A true CN117097932A (en) 2023-11-21

Family

ID=88782000

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310468639.1A Pending CN117097932A (en) 2023-04-23 2023-04-23 Video picture sharing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117097932A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination