CN112929688B - Live video recording method, projector and live video system - Google Patents

Live video recording method, projector and live video system

Info

Publication number
CN112929688B
CN112929688B (Application CN202110176148.0A)
Authority
CN
China
Prior art keywords
image
projection area
current
current time
projector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110176148.0A
Other languages
Chinese (zh)
Other versions
CN112929688A (en)
Inventor
尹左水
王凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Goertek Techology Co Ltd
Original Assignee
Goertek Techology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Goertek Techology Co Ltd filed Critical Goertek Techology Co Ltd
Priority to CN202110176148.0A priority Critical patent/CN112929688B/en
Publication of CN112929688A publication Critical patent/CN112929688A/en
Application granted granted Critical
Publication of CN112929688B publication Critical patent/CN112929688B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4334Recording operations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508Management of client data or end-user data
    • H04N21/4524Management of client data or end-user data involving the geographical location of the client
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8547Content authoring involving timestamps for synchronizing content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179Video signal processing therefor

Abstract

The invention discloses a live video recording method, a projector and a live video system. When an object enters the projection area of the projector, a current picture is obtained by photographing the projection area with a camera, and a current depth-of-field distance is obtained by scanning the projection area with a distance sensor; the current time is also determined. An image of the object is then cropped from the current picture according to the current depth-of-field distance, and the position of the object relative to the projected image in the current picture is determined. Finally, the current time, the image and the position of the object are sent to a server so that the server can superimpose the image of the object onto the video source based on that position and the current time. Compared with uploading whole videos as in the prior art, only the current time and the extracted image and position of the object, that is, only the part that differs from the video source, need to be uploaded to the server, which occupies less network bandwidth and facilitates video synthesis on the server.

Description

Live video recording method, projector and live video system
Technical Field
The invention relates to the technical field of live broadcast, in particular to a live broadcast video recording method, a projector and a live broadcast video system.
Background
With the development of the internet, more and more users learn through online lessons. Specifically, a video source is played by a projector, and the teacher sometimes enters the projection area to explain the content on the projection curtain. In the prior art, so that students can conveniently see the teacher's actions, a camera records a video of the projection area, and the recording is sent to a server, which combines the recorded video with the video source and pushes the composite stream to the live client. Although the user can see the teacher's explanation in this way, transmitting the camera recording to the server occupies a large bandwidth and places high demands on the network at the anchor end; moreover, because the recording contains both content identical to the video source and content that differs from it, composition is inconvenient.
Disclosure of Invention
The invention aims to provide a live video recording method, a projector and a live video system that only need to upload the current time and the image and position of the extracted object to a server, that is, only the part that differs from the video source, which occupies less network bandwidth and facilitates video synthesis on the server.
In order to solve the technical problem, the invention provides a live video recording method, which comprises the following steps:
acquiring a current picture obtained by photographing a projection area by a camera when an object enters the projection area of a projector and a current depth-of-field distance obtained by scanning the projection area by a distance sensor, and determining the current time;
cropping an image of the object from the current picture based on the current depth of field distance and determining a position of the object relative to a projected image in the current picture;
and sending the current time, the image and the position of the object to a server so that the server can overlay the image of the object to a video source based on the position and the current time of the image of the object.
Preferably, the method for acquiring a current picture obtained by photographing the projection area by the camera when an object enters the projection area of the projector and a current depth-of-field distance obtained by scanning the projection area by the distance sensor and determining the current time further includes:
acquiring the depth of field distance scanned by the distance sensor in the projection area of the projector;
judging whether an object enters the projection area or not based on the depth of field distance;
and if so, controlling the camera to photograph the projection area, and entering a step of acquiring a current picture obtained by photographing the projection area by the camera when an object enters the projection area of the projector and a current depth of field distance obtained by scanning the projection area by the distance sensor, and determining the current time.
Preferably, the determining whether an object enters the projection area based on the depth distance includes:
judging whether the depth of field distance is smaller than a reference distance, wherein the reference distance is not larger than the distance between the distance sensor and a projection curtain of the projector;
and if so, judging that an object enters the projection area.
Preferably, determining the current time comprises:
and when judging that an object enters the projection area based on the depth of field distance, determining a timestamp of a video frame currently played by the projector, and taking the time corresponding to the timestamp as the current time.
Preferably, before sending the current time, the image of the object and the position to a server, the method further includes:
encoding the current time, the image and the position of the object;
sending the current time, the image and the position of the object to a server, comprising:
and sending the encoded current time, the image and the position of the object to a server.
Preferably, determining the position of the object relative to the projection image in the current picture comprises:
determining the size ratio of a projection image in the current picture to the current picture;
determining a position of the object relative to a projected image in the current picture based on a position of an image of the object relative to the current picture and the size scale.
In order to solve the above technical problem, the present invention further provides a projector, including:
a memory for storing a computer program;
and the processor is used for realizing the steps of the live video recording method when the computer program is executed.
In order to solve the above technical problem, the present invention further provides a live video system, including the above projector, further including:
the camera is used for photographing the projection area to obtain a current picture;
and the distance sensor is used for scanning the projection area to obtain the current depth of field distance.
Preferably, the method further comprises the following steps:
and the server is used for superposing the image of the object to a video source based on the position and the current time of the image of the object and pushing the superposed video source to the user side.
Preferably, the server is further configured to, when current time, images of objects, and positions sent by a plurality of projectors for the video source are received at the same time, superimpose the images of the objects onto the video source based on the current time and the positions of the objects sent by the plurality of projectors, respectively, and push the superimposed video source to the user side.
The invention provides a live video recording method. First, when an object enters the projection area of a projector, a current picture obtained by photographing the projection area with a camera and a current depth-of-field distance obtained by scanning the projection area with a distance sensor are acquired, and the current time is determined. An image of the object is then cropped from the current picture according to the current depth-of-field distance, and the position of the object relative to the projected image in the current picture is determined. Finally, the current time, the image and the position of the object are sent to a server so that the server can superimpose the image of the object onto the video source based on that position and the current time. Compared with uploading whole videos as in the prior art, only the current time and the extracted image and position of the object, that is, only the part that differs from the video source, need to be uploaded to the server, which occupies less network bandwidth and facilitates video synthesis on the server.
The invention also provides a projector and a live video system, and the projector and the live video system have the same beneficial effects as the live video recording method.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the prior art and the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without creative efforts.
Fig. 1 is a process flow diagram of a live video recording method according to the present invention;
FIG. 2 is a schematic diagram of a projection area according to the present invention;
fig. 3 is a schematic structural diagram of a projector according to the present invention;
fig. 4 is a schematic structural diagram of a live video system according to the present invention.
Detailed Description
The core of the invention is to provide a live video recording method, a projector and a live video system in which only the current time and the image and position of the extracted object need to be uploaded to a server, that is, only the part that differs from the video source, occupying little network bandwidth and facilitating video synthesis on the server.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a process flow diagram of a live video recording method according to the present invention.
The method comprises the following steps:
s11: acquiring a current picture obtained by photographing a projection area by a camera when an object enters the projection area of the projector and a current depth of field obtained by scanning the projection area by a distance sensor, and determining current time;
s12: cutting out an image of the object from the current picture based on the current depth of field distance, and determining the position of the object relative to a projected image in the current picture;
s13: and sending the current time, the image and the position of the object to the server so that the server can overlay the image of the object to the video source based on the position of the image of the object and the current time.
In the prior art, the camera records a video of the projection area and the projector then uploads the whole video to the server. On one hand, this occupies a large bandwidth and places high demands on the network at the anchor end; on the other hand, because the recorded video contains both content identical to the video source and content that differs from it, composition on the server is inconvenient.
In order to solve the above technical problem, in the present application, when an object enters the projection area of the projector, a current picture obtained by photographing the projection area with a camera and a current depth-of-field distance obtained by scanning the projection area with a distance sensor are acquired. Here, the object refers to a person and/or thing that appears in the projection area independently of the projected image, and the projection area refers to the rectangular pyramid formed between the projector and the projection curtain (the projected image usually matches the projection curtain in size), as shown in fig. 2, which is a schematic diagram of the projection area provided by the present invention. The distance sensor scans the projection area along a preset path to obtain depth-of-field distances, where a depth-of-field distance is the distance between the distance sensor and either the projection curtain or an object: when no object has entered the projection area, the measured depth-of-field distance is the distance to the projection curtain, and when an object has entered, the measured depth-of-field distances include the distance to the object. In addition, the picture obtained by photographing the projection area with the camera covers at least the projection area and may be slightly larger, depending on the actual situation. Finally, the current time needs to be determined when the current picture and the current depth-of-field distance are acquired, so that the image of the object can later be superimposed at the proper position in the video source.
After the current depth-of-field distance and the current picture are obtained, and because the distances from the distance sensor to the projection curtain and to the object differ, the image of the object can be cropped from the current picture based on the current depth-of-field distance between the distance sensor and the object. In addition, to ensure that the scene seen at the user side is consistent with the actual scene, the position of the object relative to the projected image in the current picture must also be determined, so that the server can later superimpose the image of the object at the corresponding position of the corresponding frame of the video source.
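The cropping step just described can be sketched as follows. This is a minimal illustration under stated assumptions: the patent does not specify a pixel format, and all names (SCREEN_DEPTH_M, DEPTH_MARGIN_M, the use of None for transparent background pixels) are hypothetical, not from the source.

```python
# Hypothetical depth-based cropping: pixels whose depth reading is noticeably
# closer than the curtain distance are treated as belonging to the object.

SCREEN_DEPTH_M = 3.0   # assumed distance from sensor to projection curtain
DEPTH_MARGIN_M = 0.2   # assumed tolerance for sensor noise

def object_mask(depth_map):
    """Boolean mask: True where a pixel reads closer than the curtain."""
    return [[d < SCREEN_DEPTH_M - DEPTH_MARGIN_M for d in row]
            for row in depth_map]

def crop_object(picture, depth_map):
    """Keep only object pixels; background pixels become None (transparent)."""
    mask = object_mask(depth_map)
    return [[px if m else None for px, m in zip(prow, mrow)]
            for prow, mrow in zip(picture, mask)]
```

For example, with a curtain at 3.0 m, a pixel whose depth reads 1.5 m is kept as part of the object, while pixels reading the full curtain distance are blanked out.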
After the current time, the image of the object and the position relative to the projected image are obtained, the current time, the image of the object and the position relative to the projected image are sent to a server, the server determines a video frame with the time corresponding to the timestamp in the video source being equal to the current time, and the image of the object is superposed to the corresponding position of the determined video frame according to the position of the object relative to the projected image.
It should be noted that, in practical application, the server may push the video source to the projector for playing and, only after a preset delay, push it to the user side, so that the scene played on the projector is played at the user side after that delay; the video superposition described above can therefore be completed within this time difference. The preset delay may be, but is not limited to, 1 min; the application does not limit it, and it is determined according to the actual situation.
It should be noted that, in practical applications, the camera and the distance sensor may be separately disposed, or the distance sensor may be integrated into the camera to obtain the depth-of-field camera, and in addition, the camera and the distance sensor may also be disposed on the projector, which is not limited herein.
Therefore, compared with the uploading of videos in the prior art, the method and the device only need to upload the current time, the extracted image and the position of the object to the server, namely only upload the difference part with the video source to the server, occupy less network bandwidth, and are convenient for video synthesis of the server.
On the basis of the above-described embodiment:
as a preferred embodiment, before obtaining a current picture obtained by taking a picture of a projection area by a camera when an object enters the projection area of the projector and a current depth of field obtained by scanning the projection area by a distance sensor and determining a current time, the method further includes:
acquiring the depth of field distance scanned in the projection area of the projector by the distance sensor;
judging whether an object enters the projection area or not based on the depth of field distance;
and if so, controlling the camera to photograph the projection area, and entering a step of acquiring a current picture obtained by photographing the projection area by the camera when an object enters the projection area of the projector and a current depth of field distance obtained by scanning the projection area by the distance sensor, and determining the current time.
Specifically, the depth-of-field distance scanned by the distance sensor in the projection area of the projector is first acquired. Considering that when an object enters the projection area the distance between the distance sensor and the object differs from the distance between the distance sensor and the projection curtain, whether an object has entered the projection area can be judged from the depth-of-field distance. If so, the camera is controlled to photograph the projection area, and the method proceeds to the step of acquiring the current picture obtained by photographing the projection area and the current depth-of-field distance obtained by scanning it, and determining the current time.
Therefore, the embodiment can automatically judge whether an object enters the projection area according to the depth of field distance, and only controls the camera to shoot when the object enters, so that the power consumption of the camera is reduced.
As a preferred embodiment, the determining whether an object enters the projection area based on the depth distance includes:
judging whether the depth of field distance is smaller than a reference distance, wherein the reference distance is not larger than the distance between the distance sensor and a projection curtain of the projector;
and if so, judging that the object enters the projection area.
Specifically, the depth of field distance in the present application refers to a distance between the distance sensor and the projection screen or the object, and when no object enters the projection area, the depth of field distance obtained by the distance sensor is a distance between the distance sensor and the projection screen, and when an object enters the projection area, the depth of field distance obtained by the distance sensor includes a distance between the distance sensor and the object.
Considering that the distance between the distance sensor and the object is smaller than the distance between the distance sensor and the projection curtain, in this embodiment, the distance sensor is controlled to scan the projection area according to a preset path, the depth-of-field distance obtained by scanning is transmitted to the processor in the scanning process, and after the depth-of-field distance transmitted by the distance sensor is obtained by the processor, whether the depth-of-field distance is smaller than the reference distance or not can be judged, and if yes, it is determined that the object enters the projection area. And if all the depth-of-field distances are not smaller than the reference distance after the projection area is scanned according to the preset path, judging that no object enters the projection area at the moment. The reference distance is not greater than the distance from the distance sensor to the projection screen of the projector, and the reference distance is not particularly limited in the present application and is determined according to the actual situation.
Therefore, whether an object enters the projection area or not can be simply and reliably judged by the method.
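The threshold check described in this embodiment can be sketched in a few lines. The reference value and the sample representation are assumptions for illustration only; in practice the reference distance would come from calibration against the actual sensor-to-curtain distance.

```python
# Minimal sketch of the entry check: scan depth samples along a preset path
# and flag object entry as soon as any reading drops below the reference
# distance. REFERENCE_M is an assumed calibration constant that must not
# exceed the sensor-to-curtain distance.

REFERENCE_M = 2.9

def object_entered(depth_samples):
    """True once any scanned depth reading is below the reference distance."""
    return any(d < REFERENCE_M for d in depth_samples)
```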
As a preferred embodiment, determining the current time comprises:
when the object enters the projection area based on the depth of field distance, the timestamp of the video frame currently played by the projector is determined, and the time corresponding to the timestamp is used as the current time.
Specifically, when it is determined based on the depth-of-field distance that an object has entered the projection area, the camera is controlled to photograph the projection area and the current time is recorded. Concretely, the timestamp of the video frame currently played by the projector is acquired, and the time corresponding to that timestamp is taken as the current time, so that the server can later superimpose the image of the object onto that video frame and push the combined video source to the user side.
Therefore, the time that the object enters the projection area can be accurately recorded through the method, the accuracy of the image superposition of the subsequent object to the video source is guaranteed, the live scene and the real scene seen by the user at the user side are basically consistent, and the user experience is improved.
Of course, the current time may be obtained in other ways, and the application is not limited in this respect.
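One simple model of the timestamp lookup above maps the index of the currently displayed frame to a presentation time through the source frame rate. This is a hedged sketch: the patent only says the current frame's timestamp is used, and the frame-rate-based mapping and default fps value here are assumptions.

```python
# Hedged sketch: derive the "current time" from the currently played frame.
# A constant-frame-rate source lets us compute the presentation timestamp
# directly from the frame index (fps=30 is an illustrative default).

def frame_timestamp(frame_index, fps=30):
    """Presentation time, in seconds, of the frame currently on screen."""
    return frame_index / fps
```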
As a preferred embodiment, before sending the current time, the image of the object and the position to the server, the method further includes:
encoding the current time, the image and the position of the object;
sending the current time, the image of the object and the position to a server, comprising:
and sending the coded current time, the image and the position of the object to a server.
Specifically, in order to further reduce the capacity of the data uploaded to the server, reduce the occupation of the bandwidth, and improve the security of data transmission, in this embodiment, after the current time, the image and the position of the object are obtained, the current time, the image and the position of the object are further encoded, and then the encoded current time, the image and the position of the object are sent to the server.
Therefore, the capacity of the data uploaded to the server can be further reduced, the occupation of bandwidth is reduced, and the safety of data transmission is improved.
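The encoding step can be sketched with standard-library tools. The patent does not name a scheme, so JSON serialization plus zlib compression plus base64 transport encoding is one plausible stdlib realization, not the patent's specified method; the payload field names are invented for illustration.

```python
# Illustrative payload encoding before upload: serialize, compress, and make
# the result transport-safe. The inverse runs on the server.
import base64
import json
import zlib

def encode_payload(current_time, obj_image, pos):
    raw = json.dumps({"t": current_time, "img": obj_image, "pos": pos}).encode()
    return base64.b64encode(zlib.compress(raw)).decode()

def decode_payload(blob):
    raw = zlib.decompress(base64.b64decode(blob))
    return json.loads(raw)
```

A round trip through encode_payload and decode_payload recovers the original time, image and position, while the compressed blob is typically much smaller than the raw picture would be.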
As a preferred embodiment, determining the position of the object relative to the projected image in the current picture comprises:
determining the size ratio of a projected image in a current picture to the current picture;
the position of the object relative to the projected image in the current picture is determined based on the position and size scale of the image of the object relative to the current picture.
In order to enable a live view scene seen by a user at a user side to be basically consistent with a real scene, in the present application, a size ratio of a projected image in the current picture to the current picture is determined, and in the obtained current picture, a position of an image of an object relative to the projected image in the current picture is determined based on the position of the image of the object relative to the current picture and the size ratio.
Specifically, in practical applications, after the camera is initialized, the position of the projection area or the projection screen may be captured in an image recognition manner, the projection area may also be determined by the distance from the projector to the projection screen and the projection angle of the projector, and then preset coordinates may be given to the projection area or four corners of the projection screen, so that the position of the object relative to the projection image in the current picture may be reflected by the coordinates in the following process.
Therefore, the live scene seen by the user at the user side is basically consistent with the real scene through the method, and the user experience is improved.
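Given the calibrated corner coordinates described above, the position mapping can be sketched as a translate-and-scale from camera-picture coordinates to video-source coordinates. All parameter names and the specific (x, y) convention are assumptions for illustration.

```python
# Sketch of the position mapping: given the projected image's top-left corner
# and size inside the camera picture, convert the object's pixel position in
# the picture to pixel coordinates of the video source.

def to_projection_coords(obj_xy, proj_top_left, proj_size, source_size):
    """Map picture-space (x, y) to video-source pixel coordinates."""
    (x, y), (px, py) = obj_xy, proj_top_left
    sx = source_size[0] / proj_size[0]   # horizontal scale, picture -> source
    sy = source_size[1] / proj_size[1]   # vertical scale, picture -> source
    return ((x - px) * sx, (y - py) * sy)
```

For example, if the projected image occupies a 100x50-pixel region of the camera picture starting at (10, 10) and the video source is 200x100 pixels, an object seen at picture position (30, 20) lands at (40, 20) in source coordinates.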
Referring to fig. 3, fig. 3 is a schematic structural diagram of a projector according to the present invention. The projector includes:
a memory 31 for storing a computer program;
a processor 32 for implementing the steps of the live video recording method as described above when executing the computer program.
For the introduction of the projector provided by the present invention, please refer to the above method embodiment, and the present invention is not repeated herein.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a live video system according to the present invention.
The live video system includes the projector 41 as described above, and further includes:
the camera 42 is used for photographing the projection area to obtain a current picture;
and the distance sensor 43 is used for scanning the projection area to obtain the current depth of field distance.
As a preferred embodiment, further comprising:
and the server is used for superposing the image of the object to the video source based on the position of the image of the object and the current time and pushing the superposed video source to the user side.
As a preferred embodiment, the server is further configured to, when the current time, the image of the object, and the position of the object sent by the multiple projectors for the video source are received simultaneously, superimpose the image of each object onto the video source based on the current time and the position of the object sent by the multiple projectors, respectively, and push the superimposed video source to the user side.
Specifically, in some cases the images from multiple anchors who are simultaneously playing the same video source need to be superimposed onto that video source, so that the user side sees the anchors interacting within the same video. For example, suppose anchor A and anchor B are playing chess. When anchor A is in the projection area, the projector at anchor A's end sends the cropped image of anchor A, its position and the current time to the server; when anchor B is in the projection area, the projector at anchor B's end likewise sends the cropped image of anchor B, its position and the current time. The server superimposes the image of anchor A onto the video source based on the current time and image position sent by anchor A's projector, and superimposes the image of anchor B onto the video source based on the current time and image position sent by anchor B's projector, so that the user side can see anchor A and anchor B at the same time in one video scene.
In this way, the user can see multiple anchors interacting in the live broadcast and can realistically feel the presence of the other parties, which improves the user experience.
For other descriptions of the live video system provided by the present invention, reference is made to the above method embodiments; details are not repeated herein.
It should be noted that, in this specification, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a/an ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A live video recording method is characterized by comprising the following steps:
acquiring a current picture obtained by photographing the projection area by a camera when an object enters a projection area of a projector, acquiring a current depth of field distance obtained by scanning the projection area by a distance sensor, and determining a current time, wherein the projector is used for playing a video source when the video source is received;
cropping an image of the object from the current picture based on the current depth of field distance and determining a position of the object relative to a projected image in the current picture;
sending the current time, the image and the position of the object to a server so that the server can overlay the image of the object to the video source based on the position and the current time of the image of the object;
the method comprises the steps of obtaining a current picture obtained by photographing a projection area by a camera when an object enters the projection area of a projector, obtaining a current depth of field obtained by scanning the projection area by a distance sensor, and determining the current time, and further comprising the following steps:
acquiring the depth of field distance scanned in the projection area of the projector by the distance sensor;
judging whether an object enters the projection area or not based on the depth of field distance;
if so, controlling the camera to photograph the projection area, and proceeding to the step of acquiring the current picture obtained by photographing the projection area by the camera when the object enters the projection area of the projector, acquiring the current depth of field distance obtained by scanning the projection area by the distance sensor, and determining the current time;
judging whether an object enters the projection area or not based on the depth of field distance, comprising the following steps:
judging whether the depth of field distance is smaller than a reference distance, wherein the reference distance is not larger than the distance between the distance sensor and a projection curtain of the projector;
and if so, judging that an object enters the projection area.
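The entry test of claim 1 can be sketched in a few lines. The concrete distances below are illustrative assumptions; the claim fixes only that the reference distance must not exceed the distance from the distance sensor to the projection curtain.

```python
# Sketch of claim 1's entry test, with assumed example distances.

SENSOR_TO_CURTAIN_M = 3.0    # assumed sensor-to-curtain distance, metres
REFERENCE_DISTANCE_M = 2.8   # chosen not larger than SENSOR_TO_CURTAIN_M

def object_entered(depth_distance_m, reference_m=REFERENCE_DISTANCE_M):
    """An object has entered the projection area when the scanned
    depth of field distance is smaller than the reference distance,
    i.e. something sits between the sensor and the curtain."""
    return depth_distance_m < reference_m
```

With these values, a scan returning roughly the curtain distance (3.0 m) reports no object, while an anchor standing 1.5 m from the sensor is detected.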
2. The live video recording method of claim 1, wherein determining the current time comprises:
and when judging that an object enters the projection area based on the depth of field distance, determining a timestamp of a video frame currently played by the projector, and taking the time corresponding to the timestamp as the current time.
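Taking the current time from the currently played frame, as claim 2 describes, keeps the server's overlay aligned with the exact frame the viewer sees. The frame-index/fps mapping below is an assumption for illustration; real players usually expose a presentation timestamp directly.

```python
# Sketch: derive the "current time" from the frame the projector is playing.

def current_time_from_frame(frame_index, fps):
    """Media time (seconds) of the currently played video frame."""
    return frame_index / fps
```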
3. The live video recording method of claim 1, wherein before sending the current time, the image of the object, and the location to a server, further comprising:
encoding the current time, the image and the position of the object;
sending the current time, the image of the object and the position to a server, including:
and sending the encoded current time, the image and the position of the object to a server.
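Claim 3 requires encoding the current time, image, and position before sending, but does not fix a wire format. The JSON-with-base64 encoding below is therefore purely a hypothetical illustration of such an encoding step.

```python
import base64
import json

# Hypothetical wire encoding for (current time, cropped image, position).

def encode_payload(current_time, image_bytes, position):
    """Serialize the triple into bytes suitable for sending to the server."""
    return json.dumps({
        "time": current_time,
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "position": list(position),
    }).encode("utf-8")

def decode_payload(data):
    """Server-side inverse of encode_payload."""
    obj = json.loads(data.decode("utf-8"))
    return obj["time"], base64.b64decode(obj["image"]), tuple(obj["position"])
```

A production system would more likely use a compressed image format and a compact binary protocol; the point is only that the triple survives an encode/decode round trip.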
4. A method for live video recording as claimed in any one of claims 1 to 3, wherein determining the position of the object relative to a projected image in the current picture comprises:
determining the size ratio of a projection image in the current picture to the current picture;
determining a position of the object relative to a projected image in the current picture based on a position of an image of the object relative to the current picture and the size scale.
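Claim 4's coordinate mapping can be sketched as follows: locate the projected image inside the camera picture, use its size ratio to the picture to fix the scale, and map the object's picture coordinates into the video source's coordinates. All concrete sizes below are illustrative assumptions, and the sketch assumes an axis-aligned rectangular projection (a real system would also correct perspective distortion).

```python
# Sketch of claim 4: map the object's position in the camera picture to
# coordinates within the projected image / video source.

def object_position_in_projection(obj_xy, proj_top_left, proj_size, video_size):
    ox, oy = obj_xy          # object position in the camera picture (pixels)
    px, py = proj_top_left   # projected image's top-left corner in the picture
    pw, ph = proj_size       # projected image's size in the picture
    vw, vh = video_size      # video source resolution
    # Offset into the projected image, rescaled by the projection/picture
    # size ratio so the result is expressed in video-source pixels.
    return ((ox - px) * vw / pw, (oy - py) * vh / ph)

# Assumed example: 1920x1080 camera picture; the projection occupies a
# 1280x720 region with top-left corner at (320, 180); the video source is
# itself 1920x1080. An object at picture pixel (960, 540) maps to the
# centre of the video source.
pos = object_position_in_projection((960, 540), (320, 180), (1280, 720), (1920, 1080))
```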
5. A projector, characterized by comprising:
a memory for storing a computer program;
a processor for implementing the steps of the live video recording method of any one of claims 1 to 4 when executing the computer program.
6. A live video system comprising the projector of claim 5, and further comprising:
the camera is used for photographing the projection area to obtain a current picture;
and the distance sensor is used for scanning the projection area to obtain the current depth of field distance.
7. A live video system as defined in claim 6, further comprising:
and the server is used for superimposing the image of the object onto a video source based on the position of the image of the object and the current time, and pushing the superimposed video source to the user side.
8. The live video system of claim 7, wherein the server is further configured to, when receiving the current time, the image of the object, and the position sent by the plurality of projectors for the video source at the same time, superimpose the image of each object on the video source based on the current time and the position of the object sent by the plurality of projectors, respectively, and push the superimposed video source to the user side.
CN202110176148.0A 2021-02-09 2021-02-09 Live video recording method, projector and live video system Active CN112929688B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110176148.0A CN112929688B (en) 2021-02-09 2021-02-09 Live video recording method, projector and live video system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110176148.0A CN112929688B (en) 2021-02-09 2021-02-09 Live video recording method, projector and live video system

Publications (2)

Publication Number Publication Date
CN112929688A CN112929688A (en) 2021-06-08
CN112929688B true CN112929688B (en) 2023-01-24

Family

ID=76171320

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110176148.0A Active CN112929688B (en) 2021-02-09 2021-02-09 Live video recording method, projector and live video system

Country Status (1)

Country Link
CN (1) CN112929688B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1503925A (en) * 2001-02-16 2004-06-09 伊马特公司 Interactive teleconferencing display system
CN105959595A (en) * 2016-05-27 2016-09-21 西安宏源视讯设备有限责任公司 Virtuality to reality autonomous response method for virtuality and reality real-time interaction
CN107743270A (en) * 2017-10-31 2018-02-27 上海掌门科技有限公司 Exchange method and equipment

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8508550B1 (en) * 2008-06-10 2013-08-13 Pixar Selective rendering of objects
WO2013035308A2 (en) * 2011-09-05 2013-03-14 Panasonic Corporation Television communication system, terminal, and method
CN106303694A (en) * 2015-06-25 2017-01-04 上海峙森网络科技有限公司 A kind of method prepared by multimedia slide
CN105100646B (en) * 2015-08-31 2018-09-11 北京奇艺世纪科技有限公司 Method for processing video frequency and device
CN106572385A (en) * 2015-10-10 2017-04-19 北京佳讯飞鸿电气股份有限公司 Image overlaying method for remote training video presentation
CN109327658A (en) * 2018-10-09 2019-02-12 西安黑瞳信息科技有限公司 A kind of user's face snap camera unit and application method based on high-speed computation
CN109194874A (en) * 2018-10-30 2019-01-11 努比亚技术有限公司 Photographic method, device, terminal and computer readable storage medium
CN111242962A (en) * 2020-01-15 2020-06-05 中国平安人寿保险股份有限公司 Method, device and equipment for generating remote training video and storage medium
CN111654715B (en) * 2020-06-08 2024-01-09 腾讯科技(深圳)有限公司 Live video processing method and device, electronic equipment and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1503925A (en) * 2001-02-16 2004-06-09 伊马特公司 Interactive teleconferencing display system
CN105959595A (en) * 2016-05-27 2016-09-21 西安宏源视讯设备有限责任公司 Virtuality to reality autonomous response method for virtuality and reality real-time interaction
CN107743270A (en) * 2017-10-31 2018-02-27 上海掌门科技有限公司 Exchange method and equipment

Also Published As

Publication number Publication date
CN112929688A (en) 2021-06-08

Similar Documents

Publication Publication Date Title
EP2619761B1 (en) Enriching digital photographs
US9774896B2 (en) Network synchronized camera settings
JP6432029B2 (en) Method and system for producing television programs at low cost
US7224851B2 (en) Method and apparatus for registering modification pattern of transmission image and method and apparatus for reproducing the same
CN105794202B (en) Depth for video and line holographic projections is bonded to
KR101678994B1 (en) Interactive Media Wall System and Method for Displaying 3Dimentional Objects
CN108418832A (en) A kind of virtual reality shopping guide method, system and storage medium
CN111193937A (en) Processing method, device, equipment and medium for live video data
CN112004046A (en) Image processing method and device based on video conference
KR100901111B1 (en) Live-Image Providing System Using Contents of 3D Virtual Space
US20180082716A1 (en) Auto-directing media construction
CN110730340A (en) Lens transformation-based virtual auditorium display method, system and storage medium
CN112929688B (en) Live video recording method, projector and live video system
GR1004309B (en) System and method of multi-camera recording of images and simultaneous transmission thereof to a television or cinema system
US11825191B2 (en) Method for assisting the acquisition of media content at a scene
CN116962744A (en) Live webcast link interaction method, device and live broadcast system
CN113315885B (en) Holographic studio and system for remote interaction
JP2008301399A (en) Television conference apparatus, television conference method, television conference system, computer program and recording medium
EP4033755A1 (en) System for broadcasting volumetric videoconferences in 3d animated virtual environment with audio information, and method for operating said system
WO2022075073A1 (en) Image capture device, server device, and 3d data generation method
CN115086696B (en) Video playing control method and device, electronic equipment and storage medium
KR102637147B1 (en) Vertical mode streaming method, and portable vertical mode streaming system
TWI822158B (en) System and method for immersive capture of streaming video and imaging
JP5004680B2 (en) Image processing apparatus, image processing method, video conference system, video conference method, program, and recording medium
WO2008069474A1 (en) Personal-oriented multimedia studio platform apparatus and method for authorizing 3d content

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant