CN115277650B - Screen-throwing display control method, electronic equipment and related device - Google Patents


Info

Publication number
CN115277650B
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210819678.7A
Other languages
Chinese (zh)
Other versions
CN115277650A (en)
Inventor
曹佳新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Happycast Technology Co Ltd
Original Assignee
Shenzhen Happycast Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Happycast Technology Co Ltd filed Critical Shenzhen Happycast Technology Co Ltd
Priority to CN202210819678.7A
Publication of CN115277650A
Application granted
Publication of CN115277650B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/40 Support for services or applications
    • H04L65/403 Arrangements for multi-party communication, e.g. for conferences
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F3/1454 Digital output to display device; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/02 Details
    • H04L12/16 Arrangements for providing special services to substations
    • H04L12/18 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L12/1813 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/40 Support for services or applications
    • H04L65/403 Arrangements for multi-party communication, e.g. for conferences
    • H04L65/4038 Arrangements for multi-party communication, e.g. for conferences with floor control
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations

Abstract

The application relates to the field of computer and Internet technology, and in particular to a screen-casting display control method, electronic equipment, and a related device, applied to electronic equipment that realizes screen-casting display through a large screen, where the large screen comprises a first area and a second area. The method comprises the following steps: displaying video content of a target object in the first area; displaying text content of the target object in the second area; determining N first key points of the video content, where N is an integer greater than 1; configuring N second key points of the text content according to the N first key points, the N first key points being in one-to-one correspondence with the N second key points; and synchronously playing the video content and the text content according to the N first key points and the N second key points. By adopting the embodiments of the application, the video content and the text content can be played simultaneously, improving both the conference effect and the user experience.

Description

Screen-throwing display control method, electronic equipment and related device
Technical Field
The application relates to the fields of computer technology and Internet technology, and in particular to a screen-casting display control method, electronic equipment, and related devices.
Background
With the rapid development of Internet technology, the web conference has become an important means of communication. A web conference system is a multimedia conference platform that uses a network as its medium, allowing users to break through the limitations of time and region and achieve a face-to-face communication effect over the Internet. However, current conference systems can only play a single piece of content, which limits the conference effect to a certain extent.
Disclosure of Invention
The embodiment of the application provides a screen-throwing display control method, electronic equipment and related devices, which can simultaneously play video content and text content, thereby improving conference effect and user experience.
In a first aspect, an embodiment of the present application provides a method for controlling a screen display, which is applied to an electronic device, where the electronic device implements a screen display through a large screen, where the large screen includes a first area and a second area, and the method includes:
displaying video content of a target object in the first area;
displaying the text content of the target object in the second area;
determining N first key points of the video content, wherein N is an integer greater than 1;
configuring N second key points of the text content according to the N first key points, wherein the N first key points are in one-to-one correspondence with the N second key points;
And synchronously playing the video content and the text content according to the N first key points and the N second key points.
In a second aspect, an embodiment of the present application provides a screen display control device, which is applied to an electronic device, where the electronic device implements a screen display through a large screen, where the large screen includes a first area and a second area, and the device includes: a display unit, a determination unit and a play unit, wherein,
the display unit is used for displaying video content of a target object in the first area; and displaying text content of the target object in the second area;
the determining unit is used for determining N first key points of the video content, wherein N is an integer greater than 1; configuring N second key points of the text content according to the N first key points, wherein the N first key points are in one-to-one correspondence with the N second key points;
and the playing unit is used for realizing synchronous playing of the video content and the text content according to the N first key points and the N second key points.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the steps in the first aspect of the embodiment of the present application.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program causes a computer to perform some or all of the steps as described in the first aspect of the embodiments of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, wherein the computer program product comprises a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
By implementing the embodiment of the application, the following beneficial effects are achieved:
It can be seen that in the embodiments of the present application, the screen-casting display control method, the electronic equipment, and the related device are applied to electronic equipment that realizes screen-casting display through a large screen comprising a first area and a second area. Video content of a target object is displayed in the first area, and text content of the target object is displayed in the second area. N first key points of the video content are determined, where N is an integer greater than 1, and N second key points of the text content are configured according to the N first key points, with the N first key points in one-to-one correspondence with the N second key points. Synchronous playing of the video content and the text content is then realized according to the N first key points and the N second key points. In this way, the video content and the text content can be displayed in two areas of one large screen and played synchronously through the key points, thereby improving the conference effect and the user experience.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a screen projection display control method provided in an embodiment of the present application;
fig. 2 is a flow chart of another method for controlling a screen display according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 4 is a functional unit composition block diagram of a projection display control device according to an embodiment of the present application.
Detailed Description
The terms first, second and the like in the description and in the claims of the present application and in the above-described figures, are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the list of steps or elements but may include, in one possible example, other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In order to make the present application solution better understood by those skilled in the art, the following description will clearly and completely describe the technical solution in the embodiments of the present application with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The electronic device according to the embodiments of the application may include a server, which may be a cloud server or an edge server.
The local devices according to embodiments of the present application may include, but are not limited to: smartphones, tablet computers, smart robots, smart projectors, conferencing devices, in-vehicle devices, wearable devices, computing devices, or other processing devices connected to wireless modems, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices, etc., without limitation.
In this embodiment of the present application, the first local device, the second local device, and the third local device include the above local devices.
In the embodiments of the present application, a cloud conference, that is, a web conference, refers to a conference group created at the cloud by a conference creator through a first local device. The cloud can be simply understood as a cloud server side equipped with a dedicated cloud application, which supports creating conference groups on the cloud server side and providing conference services.
In the embodiments of the application, the presenter refers to a conference participant who obtains control authority over the conference desktop of the cloud conference through the second local device; in the product's original design, the conference desktop is controlled by a single user at a time. The presenter may also be the master user.
In the embodiments of the present application, shared content refers to content information uploaded to the cloud space of the cloud conference by a conference participant through a third local device, such as files (e.g., office files, CAD drawing files, audio files, video files), documents (e.g., PPT), a screen-cast image of a user's local device, screen-recording content, and so on. The shared screen-cast display can also be saved as the final screen-recorded file.
In the embodiments of the present application, at least one item of shared content of the cloud conference is displayed on the conference desktop (such as the master cloud desktop); a single presenter can input explanation information for one or more items of shared content, and a single item of shared content can receive input from one or more presenters.
Referring to fig. 1, fig. 1 is a flow chart of a screen display control method provided in an embodiment of the present application, as shown in the drawing, applied to an electronic device, where the electronic device implements screen display through a large screen, and the large screen includes a first area and a second area, and the screen display control method includes:
101. and displaying the video content of the target object in the first area.
In a specific implementation, the target object may be a product, an object, a task, a person, or the like, which is not limited herein. The large screen may include at least one of: electronic whiteboards, screens, curtains, virtual screens, etc., are not limited herein, and for example, the large screen may be a screen of a participant, and for example, the large screen may be a screen of a presenter.
In this embodiment of the present application, the electronic device may implement a screen display function through a large screen, where the large screen may include a first area and a second area. The first region may be used to display video content and the second region may be used to display corresponding text content. For example, video content of the target object may be displayed in the first region.
102. And displaying the text content of the target object in the second area.
In a specific implementation, a correspondence may be provided between the video content and the text content of the target object, for example, the video content may be a product introduction in the form of a video of the product a, and the text content may be a product introduction in the form of a text content of the product a. The text content may include at least one of: PPT, word document, PDF document, etc., and are not limited herein.
103. N first key points of the video content are determined, wherein N is an integer greater than 1.
In a specific implementation, N first key points of the video content may be determined, where N is an integer greater than 1. The N first key points may be marked manually or set by system default, or the video content may be marked at preset time intervals to obtain the N first key points, where the preset time interval may be preconfigured or set by system default.
104. And configuring N second key points of the text content according to the N first key points, wherein the N first key points are in one-to-one correspondence with the N second key points.
In a specific implementation, since the N first key points may correspond to time nodes, or may also carry keywords of the video frames at the corresponding positions, the N second key points in the text content may be determined based on those time nodes or keywords, with the N first key points and the N second key points in one-to-one correspondence.
105. And synchronously playing the video content and the text content according to the N first key points and the N second key points.
In a specific implementation, because the first key points and the second key points are in one-to-one correspondence, synchronous playing of the video content and the text content can be driven by this mapping relationship; that is, synchronous playing of the video content and the text content can be realized according to the N first key points and the N second key points.
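As a minimal sketch of this mapping, the key points can be modeled as two matched lists, with the text position looked up from the current video time; the function names and the page-number anchors here are illustrative assumptions, not part of the patent:

```python
from bisect import bisect_right

def build_keypoint_map(video_keypoints, text_keypoints):
    # Pair the i-th first key point (a video timestamp) with the i-th
    # second key point (a text anchor, e.g. a page number), one-to-one.
    if len(video_keypoints) != len(text_keypoints) or len(video_keypoints) < 2:
        raise ValueError("need N matched key points with N > 1")
    return list(zip(video_keypoints, text_keypoints))

def text_position_at(mapping, video_time):
    # Find the latest video key point at or before video_time and return
    # the text anchor paired with it, keeping both contents in step.
    times = [t for t, _ in mapping]
    i = max(bisect_right(times, video_time) - 1, 0)
    return mapping[i][1]

# Hypothetical content: key points at 0/30/75/120 s mapped to pages 1-4.
mapping = build_keypoint_map([0, 30, 75, 120], [1, 2, 3, 4])
print(text_position_at(mapping, 80))  # page 3
```

A real implementation would drive the document view from the video player's time-update events using this lookup.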
Optionally, the method further comprises the following steps:
a1, acquiring target voice information;
a2, extracting keywords from the target voice information to obtain target keywords;
a3, positioning the text content according to the target keyword to obtain a target second key point, wherein the target second key point is one of the N second key points;
a4, determining a target first key point corresponding to the target second key point;
a5, positioning the video content according to the first key point of the target to obtain a positioning position;
and A6, jumping to the positioning position to play the video content.
In a specific implementation, the target voice information may be the voice information of the presenter; for example, the presenter may introduce a product while recording audio at the same time.
Specifically, in the embodiments of the present application, target voice information may be obtained, and keyword extraction may be performed on it to obtain a target keyword. The text content is then located according to the target keyword to obtain a target second key point, which is one of the N second key points. Based on the mapping relationship between the first key points and the second key points, the target first key point corresponding to the target second key point is determined, the video content is located according to the target first key point to obtain a positioning position, and the video content is played from that position. In this way, the text content can be located based on voice information and the corresponding position in the video content can be jumped to, ensuring synchronous jumping and synchronous playing between the text content and the video content, which helps improve the user experience.
In the embodiments of the present application, content positioning is realized through voice recognition: a voice signal is received, a keyword is extracted, the control instruction corresponding to the keyword is obtained, and the display jumps to the position where the content needs to be shown, which greatly improves cloud conference efficiency.
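The voice-driven positioning of steps A1-A6 can be sketched as follows. The keyword tables and function names are illustrative assumptions; a real system would use a speech-recognition service rather than substring matching:

```python
def extract_keyword(utterance, vocabulary):
    # A2: return the first configured keyword found in the recognized speech.
    for word in vocabulary:
        if word in utterance:
            return word
    return None

def locate_video_position(utterance, keyword_to_second_kp, second_to_first_kp):
    # A3-A5: keyword -> target second key point (text) -> target first
    # key point (video timestamp), i.e. the positioning position to jump to.
    kw = extract_keyword(utterance, keyword_to_second_kp)
    if kw is None:
        return None
    second_kp = keyword_to_second_kp[kw]
    return second_to_first_kp[second_kp]

# Hypothetical tables built when the key points were configured.
kw2second = {"battery": 3, "camera": 5}
second2first = {3: 75.0, 5: 140.0}
pos = locate_video_position("now about the battery life", kw2second, second2first)
print(pos)  # 75.0
```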
Optionally, the method further comprises the following steps:
B1, acquiring a selection instruction of the video content;
b2, responding to the selection instruction to obtain a target video frame, and obtaining a reference first key point corresponding to the target video frame;
b3, determining a reference second key point according to the reference first key point;
and B4, jumping the text content to a corresponding position according to the reference second key point, and displaying the text content at the corresponding position.
In a specific implementation, the presenter can select video content; that is, a selection instruction for the video content can be obtained, a target video frame is obtained in response to the selection instruction, and a reference first key point corresponding to the target video frame is obtained. Then, based on the mapping relationship between the first key points and the second key points, the reference second key point can be determined from the reference first key point. Finally, the text content can be jumped to the corresponding position according to the reference second key point and displayed there, ensuring synchronous jumping and synchronous playing between the text content and the video content, which helps improve the user experience.
Optionally, the step B1 of obtaining a selection instruction for the video content may include the following steps:
B11, displaying N first key points;
B12, selecting a first key point i, wherein the first key point i is one of the N first key points;
and B13, generating the selection instruction, wherein the selection instruction is used for selecting the video content corresponding to the first key point i.
In a specific implementation, the N first key points may be displayed; for example, each key point may correspond to a key point identifier, so the key point identifiers of the N first key points may be displayed. A first key point i is selected, where the first key point i is one of the N first key points; that is, one key point identifier may be selected as the first key point i. A selection instruction is then generated for selecting the video content corresponding to the first key point i, enabling rapid positioning within the video.
Of course, each first key point may also correspond to a keyword, and the selection of the first key point i is achieved by selecting a keyword.
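Steps B11-B13 can be sketched as a lookup over the displayed key-point identifiers; the identifier list and the returned command dictionary are hypothetical stand-ins for the patent's selection instruction:

```python
# Key points shown to the presenter; index i selects first key point i.
video_kps = [0.0, 30.0, 75.0, 120.0]   # first key points (seconds)
text_anchors = [1, 2, 3, 4]            # matching second key points (pages)

def on_select_keypoint(i):
    # B13: the selection instruction for first key point i seeks the video
    # there and jumps the text content to the corresponding position.
    if not 0 <= i < len(video_kps):
        raise IndexError("unknown key point identifier")
    return {"seek_video_to": video_kps[i], "show_text_page": text_anchors[i]}

cmd = on_select_keypoint(2)  # presenter picks the third key-point identifier
print(cmd)  # {'seek_video_to': 75.0, 'show_text_page': 3}
```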
For example, in the embodiments of the present application, the presentation timing of the video content and the text content may be synchronized; that is, a presentation-progress synchronization mechanism may be configured in advance, for example by using the presentation progress of a static file as the reference and associating key nodes of a video or an operating state, or by an AI-based prediction mechanism. Specifically, suppose the first area of the large screen shows a content page of a PPT document for a target product uploaded by the presenter, the second area shows a recorded video of the target product uploaded by the presenter, the PPT has 20 pages, and the recorded video is correspondingly divided into 20 key nodes following the 20-page presentation order. When the presenter turns to page 5, the cloud space should locate the fifth key node of the recorded video by querying the preset correspondence with the page number, and while the presenter explains the content of page 5, the following interaction modes are supported to synchronize the corresponding video content:
(1) The playing progress of the recorded video in the second area is automatically positioned to the fifth key node, and the presenter can click the play button to play the corresponding part of the video;
(2) After the presenter turns to page 5, the video in the second area plays the corresponding video content from the located fifth key node, and after the video content finishes, the presenter continues the explanation;
(3) The presenter turns to page 5 and starts the explanation; the cloud space collects and intelligently analyzes the presenter's voice information, and when a voice instruction to play the demonstration video is detected, playback is controlled automatically.
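The page-to-key-node correspondence in this example can be sketched as follows, under the simplifying assumption that the recorded video is split evenly into one key node per PPT page (a real system would use authored node timestamps):

```python
def page_to_key_node(page, total_pages, video_duration):
    # Map a PPT page number (1-based) to its key node index and the
    # video timestamp where that node's segment begins.
    if not 1 <= page <= total_pages:
        raise ValueError("page out of range")
    node_index = page - 1
    return node_index, node_index * video_duration / total_pages

# 20 pages over a hypothetical 600 s recording: page 5 -> fifth key node.
node, seek_time = page_to_key_node(5, 20, 600.0)
print(node, seek_time)  # 4 120.0
```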
Optionally, the method further comprises the following steps:
c1, acquiring target attribute parameters of a display screen of a user side;
c2, determining a first size parameter of a first area of the large screen according to the target attribute parameter;
and C3, determining a second size parameter of a second area of the large screen according to the first size parameter and the target attribute parameter.
In this embodiment of the present application, the target attribute parameter may include at least one of the following: the size, model, material, etc. are not limited herein. In a specific implementation, a mapping relationship between a preset attribute parameter and a first size parameter may be stored in advance, and further, the first size parameter of the first area of the large screen corresponding to the target attribute parameter may be determined based on the mapping relationship.
In a specific implementation, a target attribute parameter of the user terminal's display screen can be obtained, a first size parameter of the first area of the large screen is determined according to the target attribute parameter, and a second size parameter of the second area is determined according to the first size parameter and the target attribute parameter. That is, the large screen needs to be divided into two areas, the first area and the second area, which display different contents: one area displays the video content and the other displays the text content.
For example, the large screen may first adapt a display area based on the aspect ratio of the mobile phone's screen-cast picture, with the remaining display area adapted to display the screen-cast PPT; other client devices may select one of them for screen-cast display, or automatic stream-push adaptation may be performed dynamically according to the picture the user is explaining.
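A minimal sketch of steps C1-C3, assuming the first size parameter is derived from the casting source's aspect ratio at full height and the second area takes the remaining width (the 9:16 and 4K figures are illustrative):

```python
def split_large_screen(screen_w, screen_h, source_aspect):
    # C2: first area keeps the casting source's aspect ratio at full height.
    # C3: second area takes the remaining width for the document view.
    first_w = min(screen_w, round(screen_h * source_aspect))
    return (first_w, screen_h), (screen_w - first_w, screen_h)

# A portrait phone picture (9:16) cast onto a 4K large screen.
first_area, second_area = split_large_screen(3840, 2160, 9 / 16)
print(first_area, second_area)  # (1215, 2160) (2625, 2160)
```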
Optionally, the method further comprises the following steps:
d1, receiving a full-screen display instruction of the first area;
d2, maximizing the first area and hiding the second area;
d3, displaying the video content of the target object through the first area.
In a specific implementation, a full-screen display instruction for the first area may also be received, the first area is maximized, and the second area is hidden, where the hiding may include at least one of the following: minimizing the second area, placing it beneath the first area, closing the second area, etc., without limitation here. Further, the video content of the target object can be displayed through the first area, thereby achieving full-screen display of the video content.
Of course, the second area may also implement full-screen display, and its principle is similar to that of the first area, and will not be described herein.
Optionally, the step D3 of displaying the video content of the target object through the first area may include the following steps:
d31, obtaining target environment parameters;
d32, determining a target optimization factor corresponding to the target environment parameter;
d33, acquiring a first screen display parameter;
d34, optimizing the first screen display parameter according to the target optimization factor to obtain a target first screen display parameter;
and D35, displaying the video content of the target object through the first area according to the target first screen display parameter.
In an embodiment of the present application, the target environment parameter may include at least one of the following: ambient brightness, distance, the user's vision parameters, the user's angle with respect to the electronic whiteboard, etc., without limitation here. In a specific implementation, a mapping relation between preset environment parameters and optimization factors can be stored in advance, and the value range of the optimization factor can be -0.1 to 0.1. The first screen display parameter may be a default screen display parameter, and a screen display parameter may include at least one of the following: resolution, font size, frame rate, display color, sharpness, etc., without limitation here.
In a specific implementation, a target environment parameter can be obtained, and the target optimization factor corresponding to it is determined from the preset mapping between environment parameters and optimization factors. A first screen display parameter is obtained and optimized according to the target optimization factor to obtain the target first screen display parameter, where target first screen display parameter = (1 + target optimization factor) × first screen display parameter. That is, the display can follow a color-differentiation standard while taking into account the user's viewing angle, the ambient light, and the viewing distance; the displayed color parameters are fine-tuned based on these factors to improve the screen-casting display effect and the user experience.
In a specific implementation, the screen projection parameters can be adjusted based on the position, distance, and angle of the user so that the screen projection effect meets the user's requirements; in addition, the screen projection parameters can be adjusted dynamically in combination with the user's vision parameters.
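The optimization step above (target parameter = (1 + optimization factor) × base parameter, with the factor looked up from a preset environment mapping) can be sketched as follows. The brightness buckets, factor values, and parameter names are illustrative assumptions, not taken from the embodiment:

```python
# Hypothetical mapping from an ambient-brightness bucket to an optimization
# factor within the stated range of -0.1 to 0.1 (bucket names and values
# are assumptions for illustration).
BRIGHTNESS_TO_FACTOR = {
    "dark": 0.1,     # boost display parameters in a dim room
    "normal": 0.0,
    "bright": -0.1,  # dial parameters back under strong ambient light
}

def optimize_display_params(params, brightness_bucket):
    """Apply target = (1 + factor) * base to each numeric display parameter."""
    factor = BRIGHTNESS_TO_FACTOR[brightness_bucket]
    return {name: value * (1 + factor) for name, value in params.items()}

defaults = {"resolution_scale": 1.0, "font_size": 24.0, "frame_rate": 30.0}
optimized = optimize_display_params(defaults, "dark")  # each value scaled by 1.1
```

A real implementation would combine several environment parameters (distance, viewing angle, vision parameters) rather than brightness alone, and would clamp the optimized values to valid display ranges.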
Optionally, the method further comprises the following steps:
E1, acquiring a first playing parameter of the video content and a second playing parameter of the text content;
E2, determining a target speech rate of the speaker;
E3, determining a target adjustment parameter corresponding to the target speech rate;
E4, adjusting the first playing parameter according to the target adjustment parameter to obtain a target first playing parameter;
E5, adjusting the second playing parameter according to the target first playing parameter to obtain a target second playing parameter;
and E6, playing the video content according to the target first playing parameter, and playing the text content according to the target second playing parameter.
In a specific implementation, a first playing parameter of the video content and a second playing parameter of the text content may be acquired, where the first playing parameter corresponds to the second playing parameter, and synchronous playing can be realized between the two. The first playing parameter may include at least one of the following: frame rate, resolution, font size, sharpness, and the like, which are not limited herein. The second playing parameter may include at least one of the following: playback rate, resolution, font size, sharpness, and the like, which are not limited herein.
Furthermore, the target speech rate of the speaker can be determined. A mapping relation between preset speech rates and adjustment parameters may be stored in advance, and the target adjustment parameter corresponding to the target speech rate is determined based on this mapping relation. The first playing parameter is adjusted according to the target adjustment parameter to obtain the target first playing parameter, and the second playing parameter is adjusted according to the target first playing parameter to obtain the target second playing parameter, which ensures playing synchronism between the two. The video content is then played according to the target first playing parameter, and the text content is played according to the target second playing parameter, so that synchronous playing between the video content and the text content is achieved.
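As a rough sketch of steps E1 to E6, the logic could look like the following, where the speech-rate thresholds, the adjustment values, and the rule that the text rate simply follows the video rate are illustrative assumptions:

```python
def adjustment_for_speech_rate(words_per_minute):
    """Hypothetical preset mapping from speech rate to an adjustment parameter."""
    if words_per_minute < 100:
        return -0.2  # slow speaker: slow the playback down
    if words_per_minute > 180:
        return 0.2   # fast speaker: speed the playback up
    return 0.0

def synced_play_rates(first_play_rate, words_per_minute):
    """Adjust the first (video) playing parameter, then derive the second
    (text) playing parameter from it so the two stay synchronous."""
    adj = adjustment_for_speech_rate(words_per_minute)
    target_first = first_play_rate * (1 + adj)
    target_second = target_first  # text scroll rate follows the video rate
    return target_first, target_second

video_rate, text_rate = synced_play_rates(1.0, 200)  # fast speaker
```

Deriving the second parameter from the target first parameter (rather than adjusting both independently) is what keeps the two panes synchronous by construction.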
Of course, automatic jumping during playback can also be realized based on the user's speech rate and the positions of keywords.
Optionally, the user can jump from the first area to the second area, that is, the second area is displayed in full screen. In a specific implementation, the jump is performed based on the user's authority over the second area: the user's identity needs to be verified before the jump; when the identity verification passes, the jump is performed directly; otherwise, the jump is not performed. For example, this may be provided as a member service; setting up such a service can, on the one hand, ensure security and, on the other hand, implement member functions.
Optionally, the embodiment of the present application may further implement a recording function, and in the recording process, the video may be compressed and/or cut to obtain more streamlined video content.
Optionally, the large screen may include at least one screen; for example, the first area and the second area may correspond to 2 screens, and multi-screen projection may further be implemented, where content on different screens may be spliced together or displayed separately.
It can be seen that the screen projection display control method described in the embodiment of the present application is applied to an electronic device, where the electronic device implements screen projection display through a large screen including a first area and a second area. Video content of a target object is displayed in the first area, and text content of the target object is displayed in the second area; N first key points of the video content are determined, where N is an integer greater than 1; N second key points of the text content are configured according to the N first key points, the N first key points being in one-to-one correspondence with the N second key points; and synchronous playing of the video content and the text content is implemented according to the N first key points and the N second key points. In this way, one large screen can be divided into two areas displaying the video content and the text content respectively, and synchronous playing between the two is implemented through the key points, so that the video content and the text content can be played simultaneously, thereby improving the conference effect and the user experience.
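The key-point mechanism at the heart of the method — N first key points marked in the video, each mapped one-to-one onto a second key point in the text — can be sketched as follows; the timestamps and paragraph indices are invented for illustration:

```python
import bisect

# N first key points (video timestamps, seconds) in one-to-one correspondence
# with N second key points (paragraph offsets in the text content).
video_keypoints = [0, 30, 60, 90]
text_keypoints = [0, 2, 5, 8]

def text_position_for_video_time(t):
    """Return the second key point paired with the latest first key point
    at or before playback time t, so the text pane can follow the video."""
    i = bisect.bisect_right(video_keypoints, t) - 1
    return text_keypoints[i]

paragraph = text_position_for_video_time(75)  # falls in the [60, 90) segment
```

The inverse lookup (text position to video timestamp) works the same way with the two lists swapped, which is what the embodiments' jump-in-either-direction behavior relies on.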
In accordance with the embodiment shown in fig. 1, please refer to fig. 2. Fig. 2 is a schematic flow chart of another screen projection display control method provided in an embodiment of the present application, applied to an electronic device, where the electronic device implements screen projection display through a large screen, the large screen includes a first area and a second area, and the screen projection display control method includes:
201. Acquiring target attribute parameters of a display screen of a user side.
202. Determining a first size parameter of a first area of the large screen according to the target attribute parameters.
203. Determining a second size parameter of a second area of the large screen according to the first size parameter and the target attribute parameters.
204. Displaying the video content of the target object in the first area.
205. Displaying the text content of the target object in the second area.
206. Determining N first key points of the video content, where N is an integer greater than 1.
207. Configuring N second key points of the text content according to the N first key points, where the N first key points are in one-to-one correspondence with the N second key points.
208. Synchronously playing the video content and the text content according to the N first key points and the N second key points.
The specific description of steps 201 to 208 may refer to the corresponding steps of the screen projection display control method described in fig. 1, and is not repeated herein.
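Steps 201 to 203 leave the exact split strategy open; one plausible sketch is to fit the first (video) area to the aspect ratio of the user's display and give the second (text) area the remaining width. The formula below is an assumption for illustration only:

```python
def area_sizes(screen_w, screen_h, user_w, user_h):
    """Derive the first and second area sizes from the large-screen size and
    the target attribute parameters (user display resolution)."""
    # Width of a full-height area matching the user display's aspect ratio;
    # integer arithmetic keeps the result exact.
    first_w = min(screen_w, screen_h * user_w // user_h)
    first_area = (first_w, screen_h)              # first size parameter
    second_area = (screen_w - first_w, screen_h)  # second size parameter
    return first_area, second_area

# 4K large screen, 4:3 user display: video pane 2880 px wide, text pane 960 px.
first_area, second_area = area_sizes(3840, 2160, 1024, 768)
```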
It can be seen that the screen projection display control method described in this embodiment of the present application is applied to an electronic device, where the electronic device implements screen projection display through a large screen including a first area and a second area. A target attribute parameter of a display screen at the user side is acquired; a first size parameter of the first area of the large screen is determined according to the target attribute parameter; a second size parameter of the second area of the large screen is determined according to the first size parameter and the target attribute parameter; video content of a target object is displayed in the first area, and text content of the target object is displayed in the second area; N first key points of the video content are determined, where N is an integer greater than 1; N second key points of the text content are configured according to the N first key points, the N first key points being in one-to-one correspondence with the N second key points; and synchronous playing of the video content and the text content is implemented according to the N first key points and the N second key points. In this way, the video content and the text content can be displayed in two respective areas of one large screen and played synchronously through the key points, so that they can be played simultaneously, thereby improving the conference effect and the user experience.
In accordance with the foregoing embodiments, referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device provided in an embodiment of the present application. As shown in fig. 3, the electronic device includes a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor. In this embodiment, the electronic device implements screen projection display through a large screen, the large screen includes a first area and a second area, and the programs include instructions for performing the following steps:
displaying video content of a target object in the first area;
displaying the text content of the target object in the second area;
determining N first key points of the video content, wherein N is an integer greater than 1;
configuring N second key points of the text content according to the N first key points, wherein the N first key points are in one-to-one correspondence with the N second key points;
and synchronously playing the video content and the text content according to the N first key points and the N second key points.
Optionally, the above program further comprises instructions for performing the steps of:
Acquiring target voice information;
extracting keywords from the target voice information to obtain target keywords;
positioning the text content according to the target keyword to obtain a target second key point, wherein the target second key point is one of the N second key points;
determining a target first key point corresponding to the target second key point;
positioning the video content according to the target first key point to obtain a positioning position;
and jumping to the positioning position to play the video content.
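The voice-driven jump above (keyword extraction, locating the target second key point, mapping back to the target first key point, seeking the video) might be sketched like this; the keyword table and key-point values are assumptions standing in for a real speech-recognition front end:

```python
# Hypothetical second key points: keyword -> index of the key-point pair.
keyword_to_second_kp = {"introduction": 0, "architecture": 1, "evaluation": 2}
# One-to-one correspondence: second key point index -> first key point (seconds).
second_to_first_kp = {0: 0.0, 1: 42.0, 2: 95.0}

def seek_position_for_speech(recognised_text):
    """Return the video seek position for the first known keyword in the
    recognised speech, or None when no keyword matches (no jump performed)."""
    for word in recognised_text.lower().split():
        if word in keyword_to_second_kp:
            return second_to_first_kp[keyword_to_second_kp[word]]
    return None

pos = seek_position_for_speech("let us look at the architecture next")
```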
Optionally, the above program further comprises instructions for performing the steps of:
acquiring a selection instruction of the video content;
responding to the selection instruction, obtaining a target video frame, and obtaining a reference first key point corresponding to the target video frame;
determining a reference second key point according to the reference first key point;
and jumping the text content to a corresponding position according to the reference second key point, and displaying the text content at the corresponding position.
Optionally, in terms of acquiring the selection instruction for the video content, the program includes instructions for performing the following steps:
Displaying N first key points;
selecting a first key point i, wherein the first key point i is one of the N first key points;
and generating the selection instruction, wherein the selection instruction is used for selecting the video content corresponding to the first key point i.
Optionally, the above program further comprises instructions for performing the steps of:
acquiring target attribute parameters of a display screen of a user side;
determining a first size parameter of a first area of the large screen according to the target attribute parameter;
and determining a second size parameter of a second area of the large screen according to the first size parameter and the target attribute parameter.
Optionally, the above program further comprises instructions for performing the steps of:
receiving a full screen display instruction of the first area;
maximizing the first area and hiding the second area;
and displaying the video content of the target object through the first area.
Optionally, the above program further comprises instructions for performing the steps of:
acquiring a first playing parameter of the video content and a second playing parameter of the text content;
determining a target speech rate of a speaker;
Determining a target adjustment parameter corresponding to the target speech rate;
adjusting the first playing parameters according to the target adjusting parameters to obtain target first playing parameters;
adjusting the second playing parameters according to the target first playing parameters to obtain target second playing parameters;
and playing the video content according to the target first playing parameter, and playing the text content according to the target second playing parameter.
It can be seen that the electronic device described in the embodiment of the present application implements screen projection display through a large screen including a first area and a second area. Video content of a target object is displayed in the first area, and text content of the target object is displayed in the second area; N first key points of the video content are determined, where N is an integer greater than 1; N second key points of the text content are configured according to the N first key points, the N first key points being in one-to-one correspondence with the N second key points; and synchronous playing of the video content and the text content is implemented according to the N first key points and the N second key points. In this way, the video content and the text content can be displayed in two respective areas of one large screen and played synchronously through the key points, thereby improving the conference effect and the user experience.
Fig. 4 is a functional unit block diagram of a screen display control apparatus 400 according to an embodiment of the present application, where the apparatus 400 is applied to an electronic device, and the electronic device implements a screen display through a large screen, where the large screen includes a first area and a second area, and the apparatus 400 includes: a display unit 401, a determination unit 402, and a playback unit 403, wherein,
the display unit 401 is configured to display video content of a target object in the first area; and displaying text content of the target object in the second area;
the determining unit 402 is configured to determine N first keypoints of the video content, where N is an integer greater than 1; configuring N second key points of the text content according to the N first key points, wherein the N first key points are in one-to-one correspondence with the N second key points;
the playing unit 403 is configured to implement synchronous playing of the video content and the text content according to the N first keypoints and the N second keypoints.
Optionally, the apparatus 400 is further specifically configured to:
acquiring target voice information;
extracting keywords from the target voice information to obtain target keywords;
Positioning the text content according to the target keyword to obtain a target second key point, wherein the target second key point is one of the N second key points;
determining a target first key point corresponding to the target second key point;
positioning the video content according to the target first key point to obtain a positioning position;
and jumping to the positioning position to play the video content.
Optionally, the apparatus 400 is further specifically configured to:
acquiring a selection instruction of the video content;
responding to the selection instruction, obtaining a target video frame, and obtaining a reference first key point corresponding to the target video frame;
determining a reference second key point according to the reference first key point;
and jumping the text content to a corresponding position according to the reference second key point, and displaying the text content at the corresponding position.
Optionally, in the aspect of obtaining the selection instruction for the video content, the apparatus 400 is specifically configured to:
displaying N first key points;
selecting a first key point i, wherein the first key point i is one of the N first key points;
and generating the selection instruction, wherein the selection instruction is used for selecting the video content corresponding to the first key point i.
Optionally, the apparatus 400 is further specifically configured to:
acquiring target attribute parameters of a display screen of a user side;
determining a first size parameter of a first area of the large screen according to the target attribute parameter;
and determining a second size parameter of a second area of the large screen according to the first size parameter and the target attribute parameter.
Optionally, the apparatus 400 is further specifically configured to:
receiving a full screen display instruction of the first area;
maximizing the first area and hiding the second area;
and displaying the video content of the target object through the first area.
Optionally, the apparatus 400 is further specifically configured to:
acquiring a first playing parameter of the video content and a second playing parameter of the text content;
determining a target speech rate of a speaker;
determining a target adjustment parameter corresponding to the target speech rate;
adjusting the first playing parameters according to the target adjusting parameters to obtain target first playing parameters;
adjusting the second playing parameters according to the target first playing parameters to obtain target second playing parameters;
and playing the video content according to the target first playing parameter, and playing the text content according to the target second playing parameter.
It can be seen that the screen projection display control apparatus described in the embodiment of the present application is applied to an electronic device, where the electronic device implements screen projection display through a large screen including a first area and a second area. Video content of a target object is displayed in the first area, and text content of the target object is displayed in the second area; N first key points of the video content are determined, where N is an integer greater than 1; N second key points of the text content are configured according to the N first key points, the N first key points being in one-to-one correspondence with the N second key points; and synchronous playing of the video content and the text content is implemented according to the N first key points and the N second key points. In this way, one large screen can be divided into two areas displaying the video content and the text content respectively, and synchronous playing between the two is implemented through the key points, so that the video content and the text content can be played simultaneously, thereby improving the conference effect and the user experience.
It may be understood that the functions of each program module of the screen display control device of the present embodiment may be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process may refer to the relevant description of the foregoing method embodiment, which is not repeated herein.
The embodiment of the present application also provides a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute part or all of the steps of any one of the methods described in the above method embodiments; the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any one of the methods described in the method embodiments above. The computer program product may be a software installation package, said computer comprising an electronic device.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required in the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division of units described above is merely a division of logical functions, and there may be other manners of division in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections via some interfaces, devices, or units, and may be in electrical or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in the various methods of the above embodiments may be implemented by a program that instructs associated hardware, and the program may be stored in a computer readable memory, which may include: flash disk, read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), magnetic disk or optical disk.
The embodiments of the present application have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present application, and the above descriptions of the embodiments are provided solely to assist in understanding the method of the present application and its core ideas. Meanwhile, those skilled in the art may make modifications to the specific embodiments and the application scope in accordance with the ideas of the present application. In view of the above, the contents of this description should not be construed as limiting the present application.

Claims (10)

1. A screen projection display control method, characterized in that the method is applied to an electronic device, the electronic device implements screen projection display through a large screen, the large screen comprises a first area and a second area, and the method comprises the following steps:
displaying video content of a target object in the first area;
Displaying the text content of the target object in the second area;
marking the video content at preset time intervals, and determining N first key points of the video content, wherein the N first key points comprise keywords of video pictures at corresponding positions, and N is an integer greater than 1;
configuring N second key points of the text content according to the N first key points, wherein the N first key points are in one-to-one correspondence with the N second key points;
and synchronously playing the video content and the text content according to the N first key points and the N second key points.
2. The method according to claim 1, wherein the method further comprises:
acquiring target voice information;
extracting keywords from the target voice information to obtain target keywords;
positioning the text content according to the target keyword to obtain a target second key point, wherein the target second key point is one second key point in the N second key points;
determining a target first key point corresponding to the target second key point;
positioning the video content according to the target first key point to obtain a positioning position;
And jumping to the positioning position to play the video content.
3. The method according to claim 1, wherein the method further comprises:
acquiring a selection instruction of the video content;
responding to the selection instruction, obtaining a target video frame, and obtaining a reference first key point corresponding to the target video frame;
determining a reference second key point according to the reference first key point;
and jumping the text content to a corresponding position according to the reference second key point, and displaying the text content at the corresponding position.
4. The method of claim 3, wherein the obtaining the selection instruction for the video content comprises:
displaying N first key points;
selecting a first key point i, wherein the first key point i is one of the N first key points;
and generating the selection instruction, wherein the selection instruction is used for selecting the video content corresponding to the first key point i.
5. The method according to any one of claims 1-4, further comprising:
acquiring target attribute parameters of a display screen of a user side;
determining a first size parameter of a first area of the large screen according to the target attribute parameter;
And determining a second size parameter of a second area of the large screen according to the first size parameter and the target attribute parameter.
6. The method according to any one of claims 1-4, further comprising:
receiving a full screen display instruction of the first area;
maximizing the first area and hiding the second area;
and displaying the video content of the target object through the first area.
7. The method according to any one of claims 1-4, further comprising:
acquiring a first playing parameter of the video content and a second playing parameter of the text content;
determining a target speech rate of a speaker;
determining a target adjustment parameter corresponding to the target speech rate;
adjusting the first playing parameters according to the target adjusting parameters to obtain target first playing parameters;
adjusting the second playing parameters according to the target first playing parameters to obtain target second playing parameters;
and playing the video content according to the target first playing parameter, and playing the text content according to the target second playing parameter.
8. A screen projection display control apparatus, characterized in that the apparatus is applied to an electronic device, the electronic device implements screen projection display through a large screen, the large screen comprises a first area and a second area, and the apparatus comprises: a display unit, a determining unit, and a playing unit, wherein,
the display unit is used for displaying video content of a target object in the first area; and displaying text content of the target object in the second area;
the determining unit is used for marking the video content at preset time intervals, determining N first key points of the video content, wherein the N first key points comprise keywords of video pictures at corresponding positions, and N is an integer larger than 1; configuring N second key points of the text content according to the N first key points, wherein the N first key points are in one-to-one correspondence with the N second key points;
and the playing unit is used for realizing synchronous playing of the video content and the text content according to the N first key points and the N second key points.
9. An electronic device, characterized by comprising a processor and a memory, wherein the memory is configured to store one or more programs, the one or more programs are configured to be executed by the processor, and the programs comprise instructions for performing the steps in the method of any one of claims 1-7.
10. A computer-readable storage medium, characterized in that it stores a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1-7.
CN202210819678.7A 2022-07-13 2022-07-13 Screen-throwing display control method, electronic equipment and related device Active CN115277650B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210819678.7A CN115277650B (en) 2022-07-13 2022-07-13 Screen-throwing display control method, electronic equipment and related device


Publications (2)

Publication Number Publication Date
CN115277650A CN115277650A (en) 2022-11-01
CN115277650B true CN115277650B (en) 2024-01-09

Family

ID=83765015



Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102084386A (en) * 2008-03-24 2011-06-01 姜旻秀 Keyword-advertisement method using meta-information related to digital contents and system thereof
WO2015192631A1 (en) * 2014-06-17 2015-12-23 中兴通讯股份有限公司 Video conferencing system and method
CN109246472A (en) * 2018-08-01 2019-01-18 平安科技(深圳)有限公司 Video broadcasting method, device, terminal device and storage medium
CN109819301A (en) * 2019-02-20 2019-05-28 广东小天才科技有限公司 Playback method and device, terminal device, the computer readable storage medium of video
CN111078070A (en) * 2019-11-29 2020-04-28 深圳市咨聊科技有限公司 PPT video barrage play control method, device, terminal and medium
CN112004138A (en) * 2020-09-01 2020-11-27 天脉聚源(杭州)传媒科技有限公司 Intelligent video material searching and matching method and device
CN112231498A (en) * 2020-09-29 2021-01-15 北京字跳网络技术有限公司 Interactive information processing method, device, equipment and medium
CN112291614A (en) * 2019-07-25 2021-01-29 北京搜狗科技发展有限公司 Video generation method and device
CN112883235A (en) * 2021-03-11 2021-06-01 深圳市一览网络股份有限公司 Video content searching method and device, computer equipment and storage medium
CN112954380A (en) * 2021-02-10 2021-06-11 北京达佳互联信息技术有限公司 Video playing processing method and device
CN112990191A (en) * 2021-01-06 2021-06-18 中国电子科技集团公司信息科学研究院 Shot boundary detection and key frame extraction method based on subtitle video
CN113206970A (en) * 2021-04-16 2021-08-03 广州朗国电子科技有限公司 Wireless screen projection method and device for video communication and storage medium
CN114218413A (en) * 2021-11-24 2022-03-22 星际互娱(北京)科技股份有限公司 Background system for video playing and video editing

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11206433B2 (en) * 2019-05-08 2021-12-21 Verizon Media Inc. Generating augmented videos

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Research on Audio-Video Synchronization of Sound and Text Messages;Chaohui Lü;2013 Sixth International Symposium on Computational Intelligence and Design;entire document *
Three-view fusion, three windows on one screen: research on the development and application of accessible informatized teaching resources for deaf students, based on blended teaching practice with 'courseware + sign language + subtitles';Pang Chungeng;;Modern Vocational Education (12);entire document *
An empirical study on the effect of micro-video subtitle presentation styles on learning outcomes;Wang Huijun;Guo Nan;Zhang Fenfen;;Digital Education (05);entire document *

Also Published As

Publication number Publication date
CN115277650A (en) 2022-11-01

Similar Documents

Publication Publication Date Title
US10872535B2 (en) Facilitating facial recognition, augmented reality, and virtual reality in online teaching groups
US11201754B2 (en) Synchronized accessibility for client devices in an online conference collaboration
WO2017148294A1 (en) Mobile terminal-based apparatus control method, device, and mobile terminal
WO2019105467A1 (en) Method and device for sharing information, storage medium, and electronic device
CN111654715B (en) Live video processing method and device, electronic equipment and storage medium
US11265181B1 (en) Multi-point video presentations with live annotation
CN112188267B (en) Video playing method, device and equipment and computer storage medium
US11620784B2 (en) Virtual scene display method and apparatus, and storage medium
CN113518232B (en) Video display method, device, equipment and storage medium
CN108427589B (en) Data processing method and electronic equipment
CN114697721B (en) Bullet screen display method and electronic equipment
CN113542624A (en) Method and device for generating commodity object explanation video
JP4951912B2 (en) Method, system, and program for optimizing presentation visual fidelity
CN105138216A (en) Method and apparatus for displaying audience interaction information on virtual seats
CN114679621A (en) Video display method and device and terminal equipment
US11405587B1 (en) System and method for interactive video conferencing
CN114095793A (en) Video playing method and device, computer equipment and storage medium
CN115277650B (en) Screen-throwing display control method, electronic equipment and related device
CN103336649A (en) Feedback window image sharing method and device among terminals
CN110390087A (en) Image processing method and device applied to PowerPoint
CN109739373B (en) Demonstration equipment control method and system based on motion trail
CN111367598B (en) Method and device for processing action instruction, electronic equipment and computer readable storage medium
CN113411532A (en) Method, device, terminal and storage medium for recording content
CN114969580A (en) Conference content recording method, device, conference system and storage medium
CN112672089A (en) Conference control and conferencing method, device, server, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant