CN112188267B - Video playing method, device and equipment and computer storage medium - Google Patents

Video playing method, device and equipment and computer storage medium Download PDF

Info

Publication number
CN112188267B
CN112188267B CN202011016763.7A
Authority
CN
China
Prior art keywords
video
difference
playing
client
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011016763.7A
Other languages
Chinese (zh)
Other versions
CN112188267A (en)
Inventor
郜光耀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yayue Technology Co ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202011016763.7A priority Critical patent/CN112188267B/en
Publication of CN112188267A publication Critical patent/CN112188267A/en
Application granted granted Critical
Publication of CN112188267B publication Critical patent/CN112188267B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47202End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting content on demand, e.g. video on demand
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4334Recording operations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4882Data services, e.g. news ticker for displaying messages, e.g. warnings, reminders

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application discloses a video playing method, apparatus, device and computer storage medium, relating to the technical field of computers and used for improving users' simulated-learning efficiency and reducing terminal energy waste. The method comprises the following steps: when a video playing instruction triggered by an operation on the video client is detected, detecting the video type of a first video to be played; when the video type of the first video is detected to be a simulated learning video, displaying simulated learning prompt information; based on a video recording instruction triggered by an operation on the simulated learning prompt information, synchronously recording, while the first video is played, a second video in which a target object imitates the action postures in the first video; and splitting the current display area into at least two display areas, and playing the first video and the second video respectively in different display areas.

Description

Video playing method, device and equipment and computer storage medium
Technical Field
The application relates to the technical field of computers, and provides a video playing method, a video playing device, video playing equipment and a computer storage medium.
Background
With the improvement of living standards and the development of network technology, online videos, online courses and the like have gradually become popular. For example, a user can open an imitable video, such as a dance or yoga video, on a smart terminal such as a television or a mobile phone, and then imitate the motions taught in the video.
However, when performing imitation learning, the user cannot see his or her own motion and therefore cannot perceive whether the motion, its speed and the like are standard, nor intuitively judge his or her degree of learning; for example, the user cannot intuitively find which parts have not been learned well enough. The user may thus need to practise along with the video repeatedly, which leads to low learning efficiency and poor user experience, and the repeated playing of the video by the terminal wastes energy.
Disclosure of Invention
The embodiment of the application provides a video playing method, a video playing device, video playing equipment and a computer storage medium, which are used for improving the simulation learning efficiency of a user and reducing the energy waste of a terminal.
In one aspect, a video playing method is provided, which is applied to a video client, and the method includes:
when a video playing instruction triggered by the operation of the video client is detected, detecting the video type of a first video to be played;
when the video type of the first video is detected to be a simulated learning video, displaying simulated learning prompt information;
according to a video recording instruction for operating and triggering the simulated learning prompt information, a second video with a target object simulating the action posture in the first video is synchronously recorded during the playing of the first video;
and splitting the current display area into at least two display areas, and respectively playing the first video and the second video in different display areas.
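The final step above, splitting the current display area and playing each video in its own region, can be sketched as follows. This is an illustrative sketch only; the function name and the (x, y, w, h) rectangle convention are assumptions, not part of the patent.

```python
def split_display_area(width, height, count=2, side_by_side=True):
    """Split the current display area into `count` regions, returned as
    (x, y, w, h) rectangles, so each video can play in its own region."""
    if side_by_side:
        w = width // count
        return [(i * w, 0, w, height) for i in range(count)]
    h = height // count
    return [(0, i * h, width, h) for i in range(count)]
```

For a 1920x1080 screen and two videos, this yields two 960-pixel-wide regions placed side by side; a vertical stack is obtained with `side_by_side=False`.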
In one aspect, a video playing method is provided, which is applied to a video server, and the method includes:
receiving a second video uploaded by a video client, wherein the second video is a video synchronously recorded when the video client plays a first video;
comparing the first video with the second video to obtain difference information between video pictures of the first video and the second video, wherein the difference information comprises time information of at least one video picture of which the difference between the first video and the second video reaches a set difference condition;
and responding to a video comparison and play request sent by the video client, and sending the difference information to the video client so that the video client displays difference prompt information at the play time when the difference between the video pictures of the first video and the second video reaches a set difference condition.
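The server-side comparison step above can be sketched as follows, assuming aligned frame pairs represented as flat lists of 0-255 pixel values. The difference metric (mean absolute pixel delta) and the threshold are illustrative stand-ins for the patent's unspecified "set difference condition".

```python
def compare_videos(frames_a, frames_b, fps=25.0, threshold=0.2):
    """Compare aligned frame pairs of the first and second videos; return
    the timestamps (seconds) of frames whose normalized mean absolute
    pixel difference reaches the set difference condition."""
    difference_info = []
    for index, (fa, fb) in enumerate(zip(frames_a, frames_b)):
        delta = sum(abs(pa - pb) for pa, pb in zip(fa, fb)) / (255.0 * len(fa))
        if delta >= threshold:
            difference_info.append(round(index / fps, 3))
    return difference_info
```

The returned list of timestamps corresponds to the "time information of at least one video picture" that the server sends back so the client can display difference prompts at those playing times.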
In one aspect, a video playing apparatus is provided, which is applied to a video client, and the apparatus includes:
the video type detection unit is used for detecting the video type of a first video to be played when a video playing instruction triggered by the operation of the video client is detected;
the information prompting unit is used for displaying simulated learning prompting information when the video type of the first video is detected to be a simulated learning video;
the synchronous recording unit is used for synchronously recording a second video of which the target object imitates the action posture in the first video when the first video is played according to a video recording instruction which is triggered by operating the imitation learning prompt information;
and the contrast playing unit is used for splitting the current display area into at least two display areas and respectively playing the first video and the second video in different display areas.
Optionally, the synchronous recording unit is specifically configured to:
responding to the video recording instruction, calling an equipment detection interface of an operating system, and detecting whether equipment where the video client side is located comprises an image acquisition device;
if the equipment where the video client is located comprises an image acquisition device, calling a player to play the first video;
and starting the image acquisition device to record a second video of which the target object imitates the action gesture in the first video.
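The synchronous recording unit's flow (detect a capture device via the operating system's interface, then play and record in parallel) can be sketched as below. All class and method names are illustrative assumptions; the real OS device-detection interface is platform specific.

```python
class SyncRecordingUnit:
    """Sketch of the synchronous recording unit described above."""

    def __init__(self, os_api, player, camera):
        self.os_api = os_api    # operating-system device-detection interface
        self.player = player    # video player for the first video
        self.camera = camera    # image acquisition device

    def on_record_instruction(self, first_video):
        # Call the OS device-detection interface first.
        if not self.os_api.has_image_capture_device():
            return False  # no camera: cannot record the second video
        # Play the first video and start recording the second in parallel.
        self.player.play(first_video)
        self.camera.start_recording()
        return True
```

In practice the player and camera would run on separate threads or media pipelines so that playback and recording stay synchronized.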
Optionally, the comparison playing unit is specifically configured to:
when the first video is detected to be played over or a recording ending instruction is received, ending recording the second video and displaying an operation control of the video comparison playing instruction;
responding to a video contrast playing instruction triggered by the operation of the operation control, splitting the current display area into at least two display areas, and playing the first video and the second video in different display areas respectively.
Optionally, the comparison playing unit is further configured to:
and displaying difference prompt information at the playing time when the difference between the video pictures of the first video and the second video reaches a set difference condition.
Optionally, the apparatus further includes a difference information obtaining unit, configured to obtain difference information between the first video and the second video in response to the video contrast playing instruction; the difference information comprises time information of at least one video picture with the difference reaching a set difference condition;
the comparison playing unit is used for monitoring whether the playing time indicated by any piece of time information is reached, and displaying the difference prompt information when it is monitored that the playing time indicated by any piece of time information arrives.
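The monitoring step above can be sketched as a simple check run on each playback tick; the function name and the one-frame tolerance are assumptions for illustration.

```python
def should_show_difference_prompt(play_time, difference_times, tolerance=0.04):
    """Return True when the current play time reaches any playing time in
    the difference information (within roughly one frame's tolerance)."""
    return any(abs(play_time - t) <= tolerance for t in difference_times)
```

A player loop would call this with the current position each time it refreshes, and show the difference prompt when it returns True.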
Optionally, the difference information obtaining unit is configured to:
uploading the second video to a background server of the video client;
sending a video playing request to the background server, and receiving the difference information returned by the background server in response to the video playing request; the difference information is obtained by comparing the first video and the second video by the background server.
Optionally, the comparison playing unit is configured to:
and marking the difference position in the video pictures of the first video and the second video.
Optionally, the comparison playing unit is configured to:
in response to a difference detail presentation instruction for the video client, annotating difference locations in video frames of the first video and the second video.
Optionally, the apparatus further comprises an execution unit;
the execution unit is used for responding to a video pause instruction for the video client and synchronously pausing the first video and the second video.
Optionally, the execution unit is configured to:
in response to a video scaling instruction for the video client, scaling the video pictures of the first video and the second video synchronously at specified positions.
In one aspect, a video playing apparatus is provided, which is applied to a video server, and the apparatus includes:
the receiving and sending unit is used for receiving a second video uploaded by a video client, wherein the second video is a video synchronously recorded when the video client plays a first video;
the difference analysis unit is used for comparing the first video with the second video to acquire difference information between video pictures of the first video and the second video, wherein the difference information comprises time information of at least one video picture of which the difference between the first video and the second video reaches a set difference condition;
the transceiving unit is further configured to send the difference information to the video client in response to a video comparison and play request sent by the video client, so that the video client displays difference prompt information at a play time when a difference between video pictures of the first video and the second video reaches a set difference condition.
In one aspect, a computer device is provided, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and when the processor executes the computer program, the steps of any one of the methods are implemented.
In one aspect, a computer storage medium is provided having computer program instructions stored thereon that, when executed by a processor, implement the steps of any of the above-described methods.
In the embodiment of the application, when a video playing instruction is detected, the video type of a first video to be played is detected. When the video type is a simulated learning video, simulated learning prompt information is displayed to prompt the user that the video client provides video recording and contrast playing functions for simulated learning videos. After the user operates on the prompt information, video recording can be started, a second video is recorded synchronously while the first video is played, the display area is split, and the first video and the second video are played in different display areas. In this way, for simulated-learning videos, the user is actively prompted to perform imitation learning, and during learning the user can intuitively perceive from the contrast playing where the differences lie, so that targeted practice is possible and learning efficiency is improved. As the user's learning efficiency improves, the number of times the first video is played decreases correspondingly, thereby reducing the energy waste of the device on which the video client runs.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments or related technologies, the drawings needed to be used in the description of the embodiments or related technologies are briefly introduced below, it is obvious that the drawings in the following description are only the embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is a schematic view of a scenario provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a video playing method according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating a display of simulated learning prompt information according to an embodiment of the present application;
fig. 4 is a schematic diagram of function prompt provided in the embodiment of the present application;
fig. 5a to 5c are schematic diagrams illustrating different positions of a playing window according to an embodiment of the present application;
fig. 6 is a schematic flow chart illustrating a difference prompt provided in the embodiment of the present application;
fig. 7 is a schematic diagram illustrating a video contrast playing operation control provided in an embodiment of the present application;
fig. 8a to 8c are schematic display diagrams of difference prompt information provided in the embodiment of the present application;
figs. 9a to 9b are schematic diagrams of interfaces displaying a video zoom control according to an embodiment of the present application;
fig. 10 is a schematic flowchart of playing a first video and a second video separately according to an embodiment of the present application;
fig. 11 is a schematic flowchart of a process of playing a first video and a second video in a composite manner according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a video playback device according to an embodiment of the present disclosure;
fig. 13 is a schematic structural diagram of another video playing apparatus according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions in the embodiments of the present application will be described clearly and completely with reference to the accompanying drawings. It is obvious that the described embodiments are only a part, and not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without creative effort belong to the protection scope of the present application. In the present application, the embodiments and the features of the embodiments may be combined with each other arbitrarily provided there is no conflict. Also, although a logical order is shown in the flow diagrams, in some cases the steps shown or described may be performed in an order different from that shown here.
For the convenience of understanding the technical solutions provided by the embodiments of the present application, some key terms used in the embodiments of the present application are explained first:
Client: in the embodiments of the application, the client involved is mainly a video client. An operating system may be provided with a video client for playing videos, including the video client shipped with the operating system and video clients provided by third parties. The operating system is an intelligent operating system such as iOS, Android, or Windows. The video client may also be a browser client; for example, a video website is opened in a browser, so that the functions of the website's dedicated client are realized through the browser webpage.
Server (Server): the Server may have a soft or hard score. From the hardware perspective, the Server may be a physically existing Server, and from the software perspective, the Server may refer to software having a Server-side function, or may refer to a Server running in a cloud. In the network, the network is constructed by countless nodes and connecting channels. From a hardware perspective, the system is constructed by numerous physical servers, terminals (such as Personal Computers (PCs) or mobile phones) and intermediate connection devices (such as network cables or routers) and from a software perspective, by numerous running server-side software and clients and their interconnection.
m3u8: a video playing standard belonging to the m3u family of playlist formats, encoded in UTF-8 (a character encoding). A video is cut into small segment files in the video Transport Stream (ts) format, which are stored on a server or, in order to reduce the number of Input/Output (I/O) accesses, in the server's memory. The m3u8 playlist is parsed to obtain the video paths, and the video segments are then requested.
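The parsing step can be sketched as below for a simple playlist: each `#EXTINF` line carries a segment duration and the following non-comment line carries the segment path. This is a minimal sketch, not a full implementation of the m3u8 standard (it ignores variant playlists, encryption tags and the like).

```python
def parse_m3u8(text):
    """Parse segment entries from a simple m3u8 playlist (UTF-8 text).
    Returns (duration_seconds, uri) pairs for the .ts segments."""
    segments, duration = [], None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("#EXTINF:"):
            # "#EXTINF:9.8," -> duration 9.8
            duration = float(line[len("#EXTINF:"):].split(",")[0])
        elif line and not line.startswith("#"):
            segments.append((duration, line))
            duration = None
    return segments
```

The client then requests each returned URI in order (often via a CDN, as described next) and feeds the segments to the decoder.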
Content Delivery Network (CDN): an intelligent virtual network constructed on top of the existing network. By means of edge servers deployed in various places, together with the load-balancing, content-distribution and scheduling modules of a central platform, a CDN enables users to obtain the required content from a nearby server, which relieves network congestion and improves the users' access response speed and hit rate. The key technologies of a CDN are mainly content storage and distribution.
At present, when a user performs imitation learning by following a video, the user cannot see his or her own motion and cannot perceive whether the motion, its speed and the like are standard, nor intuitively judge the degree of learning; for example, the user cannot intuitively find which parts have not been learned well enough. The user may need to practise along with the video repeatedly, which leads to low learning efficiency and poor user experience, and the repeated playing of the video by the terminal wastes energy.
A user following a video may also record an imitation video of himself or herself with a camera and then watch the recording to judge whether the motions meet the requirements. However, from the recorded video alone, the user cannot intuitively observe the differences from the original video: the motion amplitude, execution speed and so on are all difficult to compare, so the differences are hard to see and the imitation learning remains inefficient.
Based on this, in the method, when a video playing instruction is detected, the video type of a first video to be played is detected. When the video type is a simulated learning video, simulated learning prompt information is displayed to prompt the user that, for such videos, the video client includes a video recording function and a contrast playing function. After the user operates on the prompt information, video recording can be started so that a second video is recorded synchronously while the first video is played; the display area is then split, and the first video and the second video are played in different display areas. In this way, for simulated-learning videos, the user is actively prompted to perform imitation learning, and during learning the user can intuitively perceive from the contrast playing where the differences lie, practise in a targeted manner, and improve learning efficiency. As the user's learning efficiency improves, the number of times the first video needs to be played decreases correspondingly, thereby reducing the energy waste of the device on which the video client runs.
In addition, considering that the differences visible to the user's naked eye are limited, the processing capability of the computer can be combined with image technology to compare the video pictures of the two videos, find the differences between them, and display difference prompt information during playing, so that the user can intuitively perceive that the two video pictures differ and where exactly the difference lies. Specifically, when the first video and the second video are played synchronously, if the difference between their video pictures reaches the set difference condition, the difference prompt information may be displayed. In this way, the user perceives the differences more intuitively according to the prompts, which further improves learning efficiency.
In addition, when playback is paused, the first video and the second video can be paused synchronously, so that the user can visually compare the points of difference. After the pause, a zoom-in function can also be provided: the user can magnify the corresponding position to check the details, obtain more detailed points of difference, and further improve his or her simulated-learning efficiency.
After introducing the design concept of the embodiment of the present application, some simple descriptions are provided below for application scenarios to which the technical solution of the embodiment of the present application can be applied, and it should be noted that the application scenarios described below are only used for describing the embodiment of the present application and are not limited. In the specific implementation process, the technical scheme provided by the embodiment of the application can be flexibly applied according to actual needs.
The scheme provided by the embodiment of the application can be applied to most scenes needing video comparison and playing, as shown in fig. 1, the scheme provided by the embodiment of the application can be applied to a scene, and the scene can include a terminal device 10 and a server 11.
The terminal device 10 may have a video client installed therein to play video through the video client. The video client may be a client dedicated to playing video, or other application software capable of opening a video website, such as a browser. The server 11 is a background server of the video client.
The terminal device 10 may be any computer device capable of installing a video client, and may be an electronic device such as a mobile phone, a PC, a smart television, a tablet computer, a notebook computer, a smart wearable device, and a vehicle-mounted device. The terminal device 10 may implement a video recording function, and may have a built-in camera 101, or may be connected to an external camera device to record video through the external camera device. For the control of the terminal device 10, the user may perform the control through an operable device of the terminal device 10, for example, through a touch panel and keys of the terminal device 10, or may perform the control through an external operating device, such as a keyboard, a mouse, or a remote controller, or may perform the control through a voice control or a body motion control mode, for example. As shown in fig. 1, taking the terminal device 10 as an example of an intelligent television, the user may control the terminal device by voice control, body movement control, or the like, or may control the terminal device by the remote controller 102.
The server 11 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, middleware service, a domain name service, a security service, a CDN, and a big data and artificial intelligence platform.
The terminal device 10 and the server 11 may be connected through one or more networks. The network may be a wired network or a wireless network; for example, the wireless network may be a mobile cellular network or a Wireless Fidelity (WiFi) network. Of course, other possible networks may also be used, which is not limited in the embodiments of the present application.
On the terminal device 10, the user may start the video client to play the first video, for example a yoga or dance teaching video. When playing the first video, the video client may detect its video type; when it is a simulated learning video, the user may be prompted to perform imitation learning. After the user confirms, the camera 101 may be started to synchronously record the second video in which the user imitates the first video, and while recording or after recording finishes, the first video and the recorded second video are displayed synchronously.
Further, the video client may also upload the recorded second video to the server 11, and the server 11 may perform difference analysis on the first video and the second video, so as to obtain difference information of the first video and the second video. When the first video is played, or based on the playback operation of the user, the video client can synchronously play the first video and the second video, and in the playing process, the difference prompt is displayed at the playing moment with the difference, so that the user can intuitively perceive the difference points.
Of course, the method provided in the embodiment of the present application is not limited to be used in the application scenario shown in fig. 1, and may also be used in other possible application scenarios, and the embodiment of the present application is not limited. The functions that can be implemented by each device in the application scenario shown in fig. 1 will be described in the following method embodiments, and will not be described in detail herein.
To further illustrate the technical solutions provided by the embodiments of the present application, a detailed description is given below with reference to the accompanying drawings and specific embodiments. Although the embodiments of the present application provide the method operation steps shown in the following embodiments or figures, more or fewer operation steps may be included in the methods based on conventional or non-inventive effort. In steps where no necessary causal relationship exists logically, the order of execution is not limited to that provided by the embodiments of the present application. In an actual processing procedure, or when executed by a device, the steps may be performed sequentially or in parallel according to the methods shown in the embodiments or figures.
Referring to fig. 2, a flowchart of a video playing method according to an embodiment of the present application is shown, where the flowchart of the method is described as follows, and the method may be implemented by combining the terminal device 10 and the server 11 shown in fig. 1, for example.
Step 201: when a video playing instruction triggered by the operation of the video client is detected, the video type of a first video to be played is detected.
For some specific types of videos, such as videos suitable for imitation learning, for example, videos demonstrating actions such as dance or yoga, a user usually has a demand for imitation learning when playing these videos. In order to improve the learning efficiency of the user, a contrast playing function can be provided for the user. Therefore, when a user performs a video playing operation at the video client, the video type of the first video to be played can be detected based on the video playing instruction triggered by the operation.
Specifically, the video type may be detected by analyzing the video content, or determined according to the video tag.
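As a minimal sketch of the tag-based judgment mentioned above, an imitation learning video can be recognized when any of its tags belongs to a known set of imitation-learning categories. The class, method, and tag names below are illustrative assumptions, not part of the patent's implementation.

```java
import java.util.Set;

// Hypothetical sketch of video type detection by tag: a video is treated
// as an imitation learning video if any of its tags is in a known set.
// The tag names ("dance", "yoga", ...) are illustrative only.
public class VideoTypeDetector {
    private static final Set<String> IMITATION_TAGS =
            Set.of("dance", "yoga", "fitness", "gymnastics");

    public static boolean isImitationLearningVideo(Set<String> videoTags) {
        for (String tag : videoTags) {
            if (IMITATION_TAGS.contains(tag)) {
                return true;
            }
        }
        return false;
    }
}
```

Content-based detection would instead run a classifier over sampled frames; the tag check is merely the cheaper first pass.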
Step 202: and when the video type of the first video is detected to be the imitation learning video, displaying imitation learning prompt information.
When the first video is detected to be an imitation learning video, imitation learning prompt information can be displayed so as to provide function guidance for the user. Fig. 3 is a schematic diagram of an interface displaying the imitation learning prompt information. The imitation learning prompt information is used to prompt that the video client provides an imitation learning function for imitation learning videos, that is, the video recording and synchronous playing functions, and to ask the user whether to use the imitation learning function.
In particular, in order to let the user know that such functions exist, i.e., the video recording and contrast playing functions, the user may be guided in a client version that already has these functions.
Specifically, after the user starts the video client, the new functions of the current version may be displayed to prompt the user how to use them; or, after the user enters the video playing interface, a function prompt may be performed, for example, prompting the user to press a certain key to start the video recording function.
In addition, imitation learning videos can be distinguished from other types of videos. When the video client displays videos, it can detect that the display interface includes an imitation learning video, and then display function prompt information at a set position of that video, so as to perform a function prompt for it. Fig. 4 is a schematic diagram of such a function prompt. The video thumbnail display interface displays 4 videos, that is, videos 1 to 4, where videos 1 and 4 are imitation learning videos, so the function prompt control can be displayed at the set positions of videos 1 and 4. Fig. 4 shows, by way of example, the function prompt control at the lower right corner of the video thumbnail, but in specific implementations, the display position of the function prompt control can be set according to specific situations, which is not limited in the embodiment of the present application.
Step 203: and according to a video recording instruction triggered by operating the simulation learning prompt information, synchronously recording a second video of which the target object simulates the action posture in the first video when the first video is played.
In this embodiment of the application, a user may operate in view of the mimic learning prompt information, for example, may operate a yes confirmation control shown in fig. 3, and then the video client may obtain a video recording instruction triggered by the operation, so as to record a second video in which the target object mimics an action gesture in the first video in synchronization when the first video is played.
In practical application, the video recording instruction may be generated by operating the function prompt control on the interface shown in fig. 4, or by operating a video recording control displayed on the playing interface in the process of playing the first video, or may be automatically generated based on the playing operation when a first video of a specific type is played.
In the embodiment of the application, the second video can be recorded synchronously while the first video is played. The first video can be a video suitable for imitation learning, such as a yoga or dance video, and the second video can be a video in which the user imitates the action postures in the first video for learning, so that whether the user's imitation meets certain requirements can subsequently be judged on the basis of the second video.
When a video recording instruction is detected, the video client can respond to the video recording instruction, call the device detection interface of the operating system of the terminal device, and detect whether the terminal device includes an image acquisition device. If the terminal device includes an image acquisition device, video recording can be performed, so the image acquisition device can be started for video recording. If the user performs the video recording operation on the video thumbnail interface shown in fig. 4, in order to reduce user operations, when it is detected that the terminal device includes an image acquisition device, the player may be directly invoked to play the first video selected by the user, so that the user does not need to perform the playing operation himself, which reduces the user's operation steps.
Specifically, when video recording is performed, pre-assembled video recording code may be called to execute the video recording process. The code below is an example of such video recording code. In the code example, the video file to be recorded is created in advance, and the created video file is stored in a specified directory, such as "test.mp4" in the example. In order to detect whether the terminal device has video recording capability, it is necessary to detect whether the terminal device has a camera: the number of image acquisition devices included in the terminal device can be detected by constructing an Intent and querying which activities can handle it, and when the number is greater than 0, it indicates that the terminal device has video recording capability.
// Create the output file
File file = new File(Environment.getExternalStorageDirectory(), "test.mp4");
// Store it under the root directory of the sd card
Uri outputFileUri = Uri.fromFile(file);
// Generate the Intent for video capture
Intent intent = new Intent(MediaStore.ACTION_VIDEO_CAPTURE);
intent.putExtra(MediaStore.EXTRA_OUTPUT, outputFileUri);
// Query the activities that can process the intent; if the list size is > 0,
// it indicates that the device has a camera installed
List<ResolveInfo> resolveInfos = context.getPackageManager().queryIntentActivities(intent, 0);
The image acquisition device included in the terminal equipment can be a camera device included in the terminal equipment, such as a camera and the like, and can also be an external camera device connected with the terminal equipment.
If the number of the image acquisition devices is determined to be larger than 0, the terminal equipment is indicated to have the video recording capability, so that a recording program can be called to record videos, as shown in the following code example.
private static final int RECORD_VIDEO_SAVE = 1;
// Start the camera application
startActivityForResult(intent, RECORD_VIDEO_SAVE);
After the video recording is performed through the above process, a second video can be obtained.
Step 204: splitting a current display area into at least two display areas, and respectively playing the first video and the second video in different display areas.
In the embodiment of the application, the recorded second video can be displayed in real time while being recorded, that is, when a user confirms to start the imitation learning function, the current display area is split into at least two display areas, and the first video and the second video are respectively played in different display areas.
In addition, after the recording of the second video is finished, the current display area can be split into at least two display areas, and the first video and the second video are respectively played in different display areas.
Or after the recording of the second video is finished, splitting the current display area into at least two display areas based on the playback operation of the user, and playing the first video and the second video in different display areas respectively.
In specific implementation, when the first video and the second video are played synchronously, different players can be called to play the first video and the second video respectively, and different players occupy different display areas. Thus, the first video and the second video can be played through different player windows.
Fig. 5a to 5c illustrate several different playing window position layouts. The playing windows corresponding to the first video and the second video may be arranged side by side in the manner shown in fig. 5a, or in the manner shown in fig. 5b, or played in the manner shown in fig. 5c, that is, one player window is smaller and is overlaid on the other player window. Of course, other possible player arrangements may also be adopted, which is not limited in this embodiment of the present application.
In specific implementation, the first video and the second video can be synthesized, so that when contrast playing is performed, one player can be directly called to play the synthesized video, and video pictures of the first video and the second video occupy different display areas. The first video and the second video may be synthesized by a video client or a background server, which is not limited in the embodiments of the present application. For the video synthesis method, the background server may perform difference analysis before video synthesis, or may perform difference analysis after video synthesis, and if the difference analysis is performed on the synthesized video, the difference analysis is specifically performed on the picture of each frame in the synthesized video.
Specifically, when the first video and the second video are combined, the pictures of the first video and the second video may be arranged in a preset arrangement manner, and the combined video may adopt arrangements similar to those shown in fig. 5a to 5c. The preset arrangement manner may be a default setting of the video client, a setting selected by the user, or a manner adjusted by the user on an adjustment interface.
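The side-by-side arrangement of fig. 5a can be sketched as a simple per-frame composition. The sketch below uses int[][] pixel arrays in place of real decoded frames and assumes both frames have the same dimensions; a production implementation would operate on decoded YUV/RGB buffers.

```java
// Illustrative sketch of composing two equally sized frames side by side,
// as in the fig. 5a arrangement. Frames are modeled as int[][] pixel grids.
public class FrameComposer {
    // Returns a frame whose left half is frameA and right half is frameB.
    public static int[][] composeSideBySide(int[][] frameA, int[][] frameB) {
        int height = frameA.length;
        int width = frameA[0].length;
        int[][] out = new int[height][width * 2];
        for (int y = 0; y < height; y++) {
            System.arraycopy(frameA[y], 0, out[y], 0, width);
            System.arraycopy(frameB[y], 0, out[y], width, width);
        }
        return out;
    }
}
```

The fig. 5b (stacked) and fig. 5c (picture-in-picture) arrangements differ only in where the second frame's pixels are written into the output grid.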
In the embodiment of the application, in order to enable a user to more intuitively perceive the difference between the first video and the second video, a difference prompt function can be further provided. As shown in fig. 6, a schematic flow chart of the difference prompt is shown.
Step 601: and the video client synchronously records the second video when the first video is played.
Step 602: and the video client uploads the second video to the background server.
Step 603: and the background server compares the first video with the second video to obtain the difference information between the video pictures of the first video and the second video.
In the embodiment of the present application, after the second video is obtained, the first video and the second video may be subjected to picture comparison, so as to obtain difference information between video pictures of the first video and the second video.
Specifically, the terminal device may compare the pictures of the first video and the second video to obtain the difference information between the video pictures of the first video and the second video. Or, in order to avoid excessively occupying the computing resources of the terminal device and causing the terminal device to be jammed, the second video may also be uploaded to the background server, and the process of difference analysis is executed by the background server, that is, picture comparison is performed on the first video and the second video to obtain the difference information between the video pictures of the first video and the second video.
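As a minimal sketch of the picture comparison step, corresponding frames can be scored with a mean absolute pixel difference and checked against a threshold. This is an illustrative stand-in: the patent's analysis targets the action postures of moving objects (e.g., via a trained difference analysis model), not raw pixels, and the threshold value is an assumption.

```java
// Illustrative per-frame difference measure: mean absolute difference of
// grayscale pixel values between corresponding frames of the two videos.
public class FrameDiff {
    public static double meanAbsoluteDifference(int[][] frameA, int[][] frameB) {
        long total = 0;
        int count = 0;
        for (int y = 0; y < frameA.length; y++) {
            for (int x = 0; x < frameA[y].length; x++) {
                total += Math.abs(frameA[y][x] - frameB[y][x]);
                count++;
            }
        }
        return (double) total / count;
    }

    // True when the frame pair reaches the set difference condition.
    public static boolean meetsDifferenceCondition(int[][] a, int[][] b, double threshold) {
        return meanAbsoluteDifference(a, b) > threshold;
    }
}
```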
Specifically, when the second video is uploaded, the identification information of the first video may also be uploaded to the background server, so that the background server determines the first video according to the identification information. The identification information may be, for example, the playing address or the video name of the first video. The second video may also be uploaded segment by segment while it is being recorded, and the background server may then perform the difference analysis by comparing each uploaded video segment with the corresponding segment of the first video.
Because the pictures in the first video and the second video may be out of synchronization, for example, recording of the second video may start only when the first video has already been played to a certain moment, so that the pictures of the two videos at the same playing moment do not correspond to each other, picture registration may be performed on the first video and the second video before the difference analysis, so that corresponding pictures are compared, which improves the accuracy of the difference analysis.
Specifically, during picture registration, registration can be performed by using the playing time of the first video at the moment recording starts, or the video segment most similar to the second video can be found from the first video by comparing actual pictures.
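The similarity-based registration can be sketched as a sliding-window search: slide the recorded sequence over the source sequence and pick the offset with the smallest total frame distance. For brevity the sketch reduces each frame to a single feature value; an actual implementation would compare full frames or pose features.

```java
// Illustrative picture registration by similarity: find the start offset
// in the source video whose window best matches the recorded video.
public class FrameRegistration {
    // sourceFrames / recordedFrames hold one feature value per frame.
    public static int bestStartOffset(double[] sourceFrames, double[] recordedFrames) {
        int bestOffset = 0;
        double bestCost = Double.MAX_VALUE;
        for (int offset = 0; offset + recordedFrames.length <= sourceFrames.length; offset++) {
            double cost = 0;
            for (int i = 0; i < recordedFrames.length; i++) {
                cost += Math.abs(sourceFrames[offset + i] - recordedFrames[i]);
            }
            if (cost < bestCost) {
                bestCost = cost;
                bestOffset = offset;
            }
        }
        return bestOffset;
    }
}
```

When the playing time at the start of recording is known, it directly gives the offset and this search is unnecessary.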
In the embodiment of the application, when difference analysis is performed, a difference analysis model trained in advance can be used for analyzing corresponding video pictures in the first video and the second video, so that difference points can be found. The difference points mainly aim at the action difference of the moving objects in the video, so that difference labeling can be carried out on the moving objects in training samples when the difference analysis model is trained.
Step 604: and the video client responds to the video comparison and play instruction to acquire the difference information between the first video and the second video.
In the embodiment of the application, in order to facilitate the intuitive perception of the difference between the first video and the second video by the user, the first video and the second video can be played in a contrasting manner.
In a specific implementation process, the generation of the contrast playing instruction for instructing the contrast playing of the first video and the second video may include, but is not limited to, the following cases:
(1) When the end of playing of the first video is detected, the recording of the second video is ended, and a contrast playing instruction is automatically generated. That is, after the recording of the second video ends, contrast playing can be performed automatically.
(2) When the end of playing of the first video is detected and the recording of the second video is finished, or when the user indicates the end of recording of the second video, a video contrast playing operation control is displayed on the video client interface. The user can operate the video contrast playing operation control; correspondingly, when the video client detects the user's video contrast playing operation, a contrast playing instruction can be generated accordingly.
Fig. 7 is a schematic diagram showing the video contrast playing operation control. After the recording of the second video is finished, prompt text such as "Recording is finished. Play now?" shown in fig. 7 may be displayed on the display interface of the video client, and when the user confirms playing, a contrast playing instruction may be generated. Or, a contrast play button like the one at the upper right corner of fig. 7 may also be displayed on the display interface, and after the user operates the contrast play button, the video client may also generate a contrast playing instruction based on the user's operation.
In the embodiment of the application, after the video client detects the video contrast playing instruction, the video client can respond to the video contrast playing instruction to acquire the difference information between the first video and the second video.
Specifically, after the background server performs the difference analysis, the background server may directly return the difference information to the video client, and the video client may store the difference information, so that the video client may find the difference information corresponding to the first video and the second video from the stored difference information based on the video comparison and play instruction.
Specifically, the video client may further send a video playing request to the background server, where the video playing request is used to notify the background server that the first video and the second video are to be contrast played, so as to request the background server to issue the difference information between the video pictures of the first video and the second video. After receiving the video playing request, the background server may provide the difference information and the video stream information of the first video to the video client.
Step 605: the video client synchronously plays the first video and the second video.
In the embodiment of the application, the video client responds to the video comparison playing instruction, and then the first video and the second video can be synchronously played.
Step 606: and the video client monitors whether the playing time indicated by any time information is reached.
In the embodiment of the application, in the process of synchronously playing the first video and the second video, the playing time can be monitored according to the difference information.
Specifically, the difference information may include time information of at least one video picture whose difference meets a set difference condition. After the video client or the background server performs difference analysis, at least one video picture in the first video or the second video, the difference of which reaches the set difference condition, can be determined, and time information corresponding to the at least one video picture respectively is recorded, so that when the first video and the second video are synchronously played, monitoring can be performed according to the time information corresponding to the at least one video picture respectively, and whether the current playing time reaches the playing time indicated by any time information or not is determined.
The difference condition may mean that the difference between the moving objects in the video pictures of the first video and the second video is greater than a set difference threshold. The difference threshold may be selectable by the user; for example, when the user wants to impose a higher requirement on himself, the difference threshold may be set smaller, so that the user is still reminded even when the difference between the imitated action and the action in the original video is small. Or, the difference threshold may be adjusted according to the user's learning progress; for example, the difference threshold may be set larger at the beginning of learning to avoid frequent reminders, and after the user's imitation degree gradually improves, the difference threshold may be reduced, thereby raising the requirement on the imitation degree and polishing the user's learning details.
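The monitoring of steps 606-607 can be sketched as follows: given the time information (here in milliseconds) of the difference points contained in the difference information, check whether the current playing position has reached one of them. The class name, field names, and tolerance value are illustrative assumptions.

```java
import java.util.List;

// Illustrative sketch of playing-time monitoring against the time
// information in the difference information.
public class DifferenceMonitor {
    private final List<Long> differenceTimesMs; // times of difference points
    private final long toleranceMs;             // matching tolerance

    public DifferenceMonitor(List<Long> differenceTimesMs, long toleranceMs) {
        this.differenceTimesMs = differenceTimesMs;
        this.toleranceMs = toleranceMs;
    }

    // True when the current position is within tolerance of a difference
    // point, i.e. the difference prompt information should be displayed.
    public boolean shouldShowPrompt(long currentPositionMs) {
        for (long t : differenceTimesMs) {
            if (Math.abs(currentPositionMs - t) <= toleranceMs) {
                return true;
            }
        }
        return false;
    }
}
```

In practice the check would run on the player's position callback during the synchronous playback of the two videos.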
Step 607: if the determination result in step 606 is yes, the video client displays the difference prompt message.
In the embodiment of the application, when the current playing time reaches the playing time indicated by any time information, the difference of the video pictures at the current playing time is indicated to reach the difference condition, so that the difference prompt information can be displayed on the video playing interface.
Specifically, the difference information may further include a difference position where a difference exists in the video frames of the first video and the second video, and when the difference prompt information is displayed, the position where the difference exists may be marked in the video frame.
Fig. 8a is a schematic diagram illustrating a display of the difference prompt message. And correspondingly labeling the position where the difference exists between the first video and the second video.
In specific implementation, in order to help the user compare differences more intuitively, different groups of difference points may be labeled with frames of different colors or styles, where a group of difference points refers to the difference points formed by corresponding parts in the first video and the second video.
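Assigning a distinct style to each group of difference points can be as simple as cycling through a fixed palette, as in the hypothetical sketch below; the color names stand in for real drawing styles and are illustrative only.

```java
// Illustrative sketch: each group of difference points gets a labeling
// style drawn from a fixed palette, cycling when groups outnumber styles.
public class DiffGroupStyles {
    private static final String[] PALETTE = {"red", "green", "blue", "yellow"};

    public static String styleForGroup(int groupIndex) {
        return PALETTE[groupIndex % PALETTE.length];
    }
}
```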
Specifically, the labeling frames may affect the user's contrast viewing, and frequent difference labeling may harm the user's video watching experience. Therefore, when the difference prompt information is displayed, a prompt indicating that a difference exists in the current video picture may be output first. Fig. 8b and 8c are schematic diagrams of displaying the difference prompt information, where fig. 8b shows the case where two players play the first video and the second video separately, and fig. 8c shows the case where one player plays the composite video. In addition, the difference prompt may also be presented in a display area other than the display areas where the first video and the second video are located; for example, the current display area may be divided into 3 display areas, where two display areas are used for displaying the first video and the second video, and the remaining display area is used for displaying the difference prompt information.
After the difference prompt is performed, the user knows that the action difference in the current video picture is large. The user may pause to carefully view the difference positions, or may instruct display of the difference details through a specific key on a remote controller. Alternatively, a difference detail display control may be displayed on the interface; after the user operates the difference detail display control, a difference detail display instruction is generated based on the user's operation, and the video client, in response to the difference detail display instruction, labels the difference positions in the video pictures of the first video and the second video, that is, performs labeling as shown in fig. 8a.
In the embodiment of the application, when difference prompt or difference annotation is carried out, pause can be automatically carried out so that a user can carefully check difference points; of course, the playback may also be set to continue without pausing or set to reduce the playback speed, and the like, which is not limited in the embodiment of the present application.
In the embodiment of the application, for the situation where two players respectively play the first video and the second video, the user can also perform a pause operation in the process of synchronously playing the first video and the second video. Specifically, the user can perform the pause operation by voice, remote controller, keyboard, mouse or touch screen; correspondingly, the video client can generate a video pause instruction based on the user's pause operation, and synchronously pause the first video and the second video in response to the video pause instruction, so that the user can carefully check the difference points.
In addition, after the first video and the second video are synchronously paused, the user may further perform a zoom operation on the videos, for example, press a specific key corresponding to a video zoom function on a remote controller, and then the video client may generate a video zoom instruction based on the zoom operation of the user, and zoom a specified position in the video frame of the first video and/or the video frame of the second video in response to the video zoom instruction, for example, the first video and the second video may be synchronously zoomed, or the first video and the second video may also be respectively zoomed.
In a specific implementation process, after the video is paused, a video zooming control can be displayed on the display interface, and a user can operate the video zooming control, so that zooming control on the first video and/or the second video is realized. As shown in fig. 9a and 9b, the interface is a schematic diagram showing a video zoom control. As shown in fig. 9a, video zooming controls can be respectively displayed in two player interfaces, so that a user can respectively operate the video zooming controls in the player interfaces, and further respectively zoom the first video or the second video; alternatively, as in fig. 9b, a video zoom control may be displayed in one of the player interfaces, and then the user may operate the video zoom control to implement the synchronous zoom control on the first video and the second video.
The following describes aspects of embodiments of the present application with specific examples.
As shown in fig. 10, a schematic flow chart of playing a first video and a second video separately for a player is shown.
Step 1001: when the video client detects a playing instruction of the first video, the video client detects the video type of the first video.
Step 1002: and when the video type of the first video is the imitation learning video, displaying imitation learning prompt information.
Step 1003: and synchronously recording a second video when the first video is played according to a video recording instruction triggered by operating the simulation learning prompt information.
Step 1004: and the video client uploads the second video to the background server and carries the identification information of the first video.
Step 1005: and the background server performs difference analysis to obtain difference information. The difference information comprises difference points of video pictures in the first video and the second video and time information corresponding to the difference points.
Step 1006: and the background server returns the difference information to the video client.
Step 1007: and the video client synchronously plays the first video and the second video.
Step 1008: and displaying the difference prompt information when the time corresponding to the difference point is played.
Fig. 11 is a schematic flowchart illustrating a process of synthesizing and playing the first video and the second video.
Step 1101: when the video client detects a playing instruction of the first video, the video client detects the video type of the first video.
Step 1102: and when the video type of the first video is the imitation learning video, displaying imitation learning prompt information.
Step 1103: and synchronously recording a second video when the first video is played according to a video recording instruction triggered by operating the simulation learning prompt information.
Step 1104: and the video client sends a video synthesis request to the background server. The video synthesis request carries the recorded second video, the identification information of the first video and the account logged in on the video client.
Step 1105: and the background server performs difference analysis to obtain difference information. The difference information comprises difference points of video pictures in the first video and the second video and time information corresponding to the difference points.
Step 1106: and the background server synthesizes the first video and the second video to obtain a synthesized video.
Step 1107: and the background server returns the film source information of the synthesized video to the video client.
Step 1108: the video client sends a play authentication request to the background server. The play authentication request is used to request play authentication from the background server when the video client with the logged-in account requests to play the synthesized video.
Step 1109: the background server returns a playing address to the video client. The playing address is the playing address of the synthesized video.
Step 1110: the video client acquires an m3u8 video file from the CDN network based on the playing address and requests to download the ts stream of the composite video.
Of course, in the implementation process, other video file formats may be adopted, and here, m3u8 is specifically taken as an example.
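The m3u8 file obtained in step 1110 is a playlist whose non-comment lines are the addresses of the ts segments to download. The sketch below only illustrates this playlist structure; a real player would use a full HLS parser.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal illustrative sketch of extracting ts segment URIs from an m3u8
// media playlist: lines not starting with '#' are segment addresses.
public class M3u8Segments {
    public static List<String> segmentUris(String playlist) {
        List<String> uris = new ArrayList<>();
        for (String line : playlist.split("\n")) {
            String trimmed = line.trim();
            if (!trimmed.isEmpty() && !trimmed.startsWith("#")) {
                uris.add(trimmed);
            }
        }
        return uris;
    }
}
```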
Step 1111: the CDN network returns the ts stream to the video client.
Step 1112: and the video client plays the synthesized video.
Step 1113: and displaying the difference prompt information when the time corresponding to the difference point is played.
To sum up, by contrast playing the source video and the imitation video and displaying difference prompts, the user is prompted about difference points more intuitively, which helps improve the user's imitation learning efficiency and the user experience of the video client. The method provided by the embodiment of the application can be used on mobile terminals such as mobile phones and on terminals such as smart televisions, so that while the user experience is improved, the energy consumption of the terminals is correspondingly reduced.
Referring to fig. 12, based on the same inventive concept, an embodiment of the present application further provides a video playing apparatus 120, including:
the video type detection unit 1201 is configured to detect a video type of a first video to be played when a video playing instruction triggered by an operation on a video client is detected;
an information prompt unit 1202, configured to display simulated learning prompt information when the video type of the first video is detected to be a simulated learning video;
a synchronous recording unit 1203, configured to synchronously record, according to a video recording instruction triggered by operating the simulation learning prompt information, a second video in which the target object simulates an action gesture in the first video when the first video is played;
the contrast playing unit 1204 is configured to split the current display area into at least two display areas, and play the first video and the second video in different display areas, respectively.
Optionally, the synchronous recording unit 1203 is specifically configured to:
responding to a video recording instruction, calling an equipment detection interface of an operating system, and detecting whether equipment where a video client is located comprises an image acquisition device;
if the device where the video client is located includes an image acquisition device, calling a player to play the first video; and,
and starting an image acquisition device to record a second video of which the target object imitates the action gesture in the first video.
Optionally, the comparison playing unit 1204 is specifically configured to:
when it is detected that the first video has finished playing, or when a recording ending instruction is received, ending the recording of the second video and displaying an operation control for the video comparison playing instruction;
responding to a video comparison playing instruction triggered by the operation of the operation control, splitting the current display area into at least two display areas, and respectively playing the first video and the second video in different display areas.
Optionally, the comparison playing unit 1204 is further configured to:
displaying difference prompt information at each playing time at which the difference between the video pictures of the first video and the second video reaches a set difference condition.
Optionally, the apparatus further includes a difference information obtaining unit 1205, configured to obtain difference information between the first video and the second video in response to the video contrast playing instruction; the difference information comprises time information of at least one video picture with the difference reaching a set difference condition;
the comparison playing unit is configured to monitor whether the playing time indicated by any piece of the time information has been reached, and to display the difference prompt information when it is monitored that the playing time indicated by any piece of the time information has arrived.
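The monitoring step above can be sketched as a small polling helper driven by the player's current position. The function name and millisecond units are illustrative assumptions:

```python
from typing import List, Set

def due_difference_prompts(play_time_ms: int,
                           difference_times_ms: List[int],
                           shown: Set[int]) -> List[int]:
    """Return the difference timestamps whose indicated playing time has been
    reached but not yet prompted; the caller would overlay the difference
    prompt information for each returned moment."""
    due = [t for t in sorted(difference_times_ms)
           if t <= play_time_ms and t not in shown]
    shown.update(due)  # remember which prompts were already displayed
    return due
```

A player would call this on each playback-position callback, so prompts fire once as each annotated moment is crossed.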
Optionally, the difference information obtaining unit 1205 is configured to:
uploading the second video to a background server of the video client;
sending a video playing request to the background server, and receiving the difference information returned by the background server in response to the video playing request; the difference information is obtained by the background server through comparing the first video with the second video.
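The request/response exchange with the background server might be serialized as in the sketch below. All field names are illustrative assumptions; the patent does not specify a wire format:

```python
import json
from typing import List

def build_play_request(first_video_id: str, second_video_id: str) -> str:
    """Serialize a video playing request asking for difference information
    (field names are hypothetical, not a documented protocol)."""
    return json.dumps({"first_video": first_video_id,
                       "second_video": second_video_id,
                       "want_difference_info": True})

def parse_difference_info(payload: str) -> List[int]:
    """Extract the list of playing times (ms) at which the server found the
    video pictures to differ beyond the set difference condition."""
    return json.loads(payload).get("difference_times_ms", [])
```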
Optionally, the comparison playing unit 1204 is configured to:
marking the difference positions in the video pictures of the first video and the second video.
Optionally, the comparison playing unit 1204 is configured to:
in response to a difference detail presentation instruction for the video client, annotating the difference positions in the video pictures of the first video and the second video.
Optionally, the apparatus further comprises an execution unit 1206;
the execution unit is configured to synchronously pause the first video and the second video in response to a video pause instruction for the video client.
Optionally, the executing unit 1206 is configured to:
in response to a video scaling instruction for the video client, synchronously scaling the video pictures of the first video and the second video at the specified positions.
The apparatus may be configured to execute a method executed by the video client side in the methods shown in the embodiments shown in fig. 2 to 11, and therefore, for functions and the like that can be realized by each functional module of the apparatus, reference may be made to the description of the embodiments shown in fig. 2 to 11, which is not repeated.
Referring to fig. 13, based on the same inventive concept, an embodiment of the present application further provides a video playing apparatus 130, including:
the transceiving unit 1301 is configured to receive a second video uploaded by the video client, where the second video is a video that is synchronously recorded when the video client plays the first video;
a difference analysis unit 1302, configured to compare the first video with the second video, and obtain difference information between video frames of the first video and the second video, where the difference information includes time information of at least one video frame of the first video and the second video, where a difference between the first video and the second video reaches a set difference condition;
the transceiving unit 1301 is further configured to send difference information to the video client in response to a video comparison and play request sent by the video client, so that the video client displays the difference prompt information at a play time when the difference between the video frames of the first video and the second video reaches a set difference condition.
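The server-side comparison in unit 1302 is left unspecified by the patent; as a hedged stand-in, a mean absolute pixel difference per frame pair can illustrate how the time information is produced (a real system would more likely compare extracted action poses):

```python
from typing import List, Sequence
import numpy as np

def difference_times(frames_a: Sequence[np.ndarray],
                     frames_b: Sequence[np.ndarray],
                     fps: float,
                     threshold: float) -> List[int]:
    """Compare the two videos frame by frame and return the playing times (ms)
    at which the difference reaches the set difference condition.

    The mean absolute pixel difference is an illustrative metric only."""
    times = []
    for i, (fa, fb) in enumerate(zip(frames_a, frames_b)):
        # Widen to a signed type so the subtraction cannot wrap around.
        diff = np.mean(np.abs(fa.astype(np.int16) - fb.astype(np.int16)))
        if diff >= threshold:
            times.append(int(i * 1000 / fps))
    return times
```

The returned list corresponds to the "time information of at least one video picture" in the difference information sent back to the client.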
The apparatus may be configured to execute a method executed by the background server side in the methods shown in the embodiments shown in fig. 2 to 11, and therefore, for functions and the like that can be realized by each functional module of the apparatus, reference may be made to the description of the embodiments shown in fig. 2 to 11, which is not described in detail.
Referring to fig. 14, based on the same technical concept, the embodiment of the present application further provides a computer device 140, which may include a memory 1401 and a processor 1402.
The memory 1401 is used for storing the computer program executed by the processor 1402. The memory 1401 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, an application program required for at least one function, and the like; the data storage area may store data created according to the use of the computer device, and the like. The processor 1402 may be a central processing unit (CPU), a digital processing unit, or the like. The embodiment of the present application does not limit the specific connection medium between the memory 1401 and the processor 1402. In the embodiment of the present application, the memory 1401 and the processor 1402 are connected through the bus 1403, which is represented by a thick line in fig. 14; the connection manner between other components is merely schematically illustrated and is not limiting. The bus 1403 may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 14, but this does not mean that there is only one bus or one type of bus.
The memory 1401 may be a volatile memory such as a random-access memory (RAM); the memory 1401 may also be a non-volatile memory such as, but not limited to, a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or the memory 1401 may be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 1401 may also be a combination of the above memories.
A processor 1402, configured to execute the method executed by the video client or the backend server in the embodiments shown in fig. 2 to fig. 11 when invoking the computer program stored in the memory 1401.
In some possible embodiments, various aspects of the method provided by the present application may also be implemented in the form of a program product including program code; when the program product runs on a computer device, the program code causes the computer device to perform the steps of the methods according to the various exemplary embodiments of the present application described above in this specification. For example, the computer device may perform the methods performed by the video client or the background server in the embodiments shown in fig. 2 to 11.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes. Alternatively, the integrated units described above in the present application may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or portions thereof contributing to the related art may be embodied in the form of a software product stored in a storage medium, and including several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media capable of storing program code.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (14)

1. A video playing method, applied to a video client, the method comprising:
displaying a display interface of the video client, wherein function prompt information is displayed at a set position of the simulated learning video in the display interface, and the function prompt information is used for indicating that the corresponding video has the simulated learning function;
when a video playing instruction triggered by the operation of the video client is detected, detecting the video type of a first video to be played;
when the video type of the first video is detected to be a simulated learning video, displaying simulated learning prompt information, wherein the simulated learning prompt information is used for prompting whether a simulated learning function is used or not;
synchronously recording a second video with a target object simulating the action gesture in the first video when the first video is played based on a video recording instruction for confirming the operation trigger on the simulated learning prompt information;
sending a video synthesis request to a background server, and receiving film source information returned by the background server; the video synthesis request carries the second video, the identification information of the first video and the account information logged on the video client;
when the first video is detected to be played over or a recording ending instruction is received, ending recording the second video and displaying an operation control of the video comparison playing instruction;
responding to a video comparison playing instruction triggered by operating the operation control, acquiring a synthesized video from a content delivery network (CDN) based on a playing address and playing the synthesized video, splitting a current display area into at least two display areas, and playing video pictures of the first video and the second video in different display areas, respectively.
2. The method of claim 1, wherein the synchronously recording a second video with a target object simulating an action gesture in the first video while the first video is played based on the video recording instruction triggered by the confirmation operation on the simulated learning prompt information specifically comprises:
responding to the video recording instruction, calling an equipment detection interface of an operating system, and detecting whether the equipment where the video client is located comprises an image acquisition device;
if the equipment where the video client is located comprises an image acquisition device, calling a player to play the first video; and
starting the image acquisition device to record a second video in which the target object imitates the action gesture in the first video.
3. The method according to claim 1 or 2, wherein when splitting a current display area into at least two display areas and playing video pictures of the first video and the second video in different display areas, respectively, the method further comprises:
displaying difference prompt information at a playing time when the difference between the video pictures of the first video and the second video reaches a set difference condition.
4. The method of claim 1, wherein the method further comprises:
responding to the video comparison playing instruction, and acquiring difference information between the first video and the second video; the difference information comprises time information of at least one video picture with the difference reaching a set difference condition;
wherein the displaying of difference prompt information at the playing time when the difference between the video pictures of the first video and the second video reaches the set difference condition comprises:
monitoring whether the playing time indicated by any time information is reached;
displaying the difference prompt information when it is monitored that the playing time indicated by any piece of the time information has arrived.
5. The method of claim 4, wherein prior to obtaining difference information between the first video and the second video in response to the video contrast play instruction, the method further comprises:
uploading the second video to a background server of the video client;
responding to the video comparison playing instruction, and acquiring difference information between the first video and the second video, including:
sending a video playing request to the background server, and receiving the difference information returned by the background server in response to the video playing request; the difference information is obtained by the background server through comparing the first video with the second video.
6. The method of claim 1, wherein displaying the difference cue information at a play time when the difference information between the video frames of the first video and the second video reaches a set difference condition comprises:
marking the difference positions in the video pictures of the first video and the second video.
7. The method according to claim 1, wherein after displaying the difference cue information at a play time when the difference information between the video pictures of the first video and the second video reaches the set difference condition, the method further comprises:
in response to a difference detail presentation instruction for the video client, annotating difference locations in video frames of the first video and the second video.
8. The method of claim 1, wherein after splitting a current display area into at least two display areas and playing video pictures of the first video and the second video, respectively, in different display areas, the method further comprises:
synchronously pausing the first video and the second video in response to a video pause instruction for the video client.
9. The method of claim 8, wherein after synchronously pausing the first video and the second video in response to a video pause instruction for the video client, the method further comprises:
in response to a video scaling instruction for the video client, scaling the video pictures of the first video and the second video synchronously at specified positions.
10. A video playing method, applied to a video server, the method comprising:
receiving a second video uploaded by a video client, wherein the second video is a video synchronously recorded when the video client plays a first video;
comparing the first video with the second video to obtain difference information between video pictures of the first video and the second video, wherein the difference information comprises time information of at least one video picture of which the difference between the first video and the second video reaches a set difference condition;
responding to a video comparison and play request sent by the video client, sending the difference information to the video client, so that the video client displays difference prompt information at the play time when the difference between the video pictures of the first video and the second video reaches a set difference condition;
receiving a video synthesis request of the video client, wherein the video synthesis request carries the second video, the identification information of the first video and the account information logged on the video client;
returning film source information corresponding to a synthesized video to the video client, so that the video client ends the recording of the second video and displays an operation control for a video comparison playing instruction, and, in response to the video comparison playing instruction triggered by operating the operation control, acquires the synthesized video from a content delivery network (CDN) based on a playing address and plays it, thereby splitting the current display area of the video client into at least two display areas and playing the video pictures of the first video and the second video in different display areas, respectively.
11. A video playing apparatus, applied to a video client, the apparatus comprising:
the video type detection unit is used for displaying a display interface of the video client, wherein function prompt information is displayed at a set position of the simulated learning video in the display interface and used for indicating that the corresponding video has the simulated learning function; when a video playing instruction triggered by the operation of the video client is detected, detecting the video type of a first video to be played;
the information prompting unit is used for displaying simulated learning prompting information when the video type of the first video is detected to be a simulated learning video, and the simulated learning prompting information is used for prompting whether a simulated learning function is used or not;
the synchronous recording unit is used for synchronously recording a second video of which the target object imitates the action posture in the first video when the first video is played according to a video recording instruction for confirming the operation trigger on the imitation learning prompt information;
the comparison playing unit is used for sending a video synthesis request to the background server and receiving the film source information returned by the background server; the video synthesis request carries the second video, the identification information of the first video and the account information logged on the video client; when the first video is detected to be played over or a recording ending instruction is received, ending recording the second video and displaying an operation control of the video comparison playing instruction; and responding to a video comparison playing instruction triggered by the operation of the operation control, acquiring a composite video from a CDN network based on a playing address, playing the composite video, splitting a current display area into at least two display areas, and playing video pictures of the first video and the second video in different display areas respectively.
12. A video playing apparatus, applied to a video server, the apparatus comprising:
the receiving and sending unit is used for receiving a second video uploaded by a video client, wherein the second video is a video synchronously recorded when the video client plays a first video;
the difference analysis unit is used for comparing the first video with the second video to acquire difference information between video pictures of the first video and the second video, wherein the difference information comprises time information of at least one video picture of which the difference between the first video and the second video reaches a set difference condition;
the transceiving unit is further configured to send the difference information to the video client in response to a video comparison and play request sent by the video client, so that the video client displays difference prompt information at a play time when a difference between video pictures of the first video and the second video reaches a set difference condition;
the receiving and sending unit is further configured to receive a video synthesis request of the video client, wherein the video synthesis request carries the second video, the identification information of the first video, and the account information logged in on the video client; and to return film source information corresponding to a synthesized video to the video client, so that the video client ends the recording of the second video and displays an operation control for a video comparison playing instruction, and, in response to the video comparison playing instruction triggered by operating the operation control, acquires the synthesized video from a content delivery network (CDN) based on a playing address and plays it, thereby splitting the current display area of the video client into at least two display areas and playing the video pictures of the first video and the second video in different display areas, respectively.
13. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor,
the processor, when executing the computer program, performs the steps of the method of any one of claims 1 to 9 or 10.
14. A computer storage medium having computer program instructions stored thereon, wherein,
the computer program instructions, when executed by a processor, perform the steps of the method of any one of claims 1 to 9 or 10.
CN202011016763.7A 2020-09-24 2020-09-24 Video playing method, device and equipment and computer storage medium Active CN112188267B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011016763.7A CN112188267B (en) 2020-09-24 2020-09-24 Video playing method, device and equipment and computer storage medium


Publications (2)

Publication Number Publication Date
CN112188267A CN112188267A (en) 2021-01-05
CN112188267B true CN112188267B (en) 2022-09-09

Family

ID=73957118

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011016763.7A Active CN112188267B (en) 2020-09-24 2020-09-24 Video playing method, device and equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN112188267B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112667345A (en) * 2021-01-25 2021-04-16 深圳市景阳信息技术有限公司 Image display method and device, electronic equipment and readable storage medium
CN113504882B (en) * 2021-04-30 2024-02-06 惠州华阳通用电子有限公司 Multi-system multi-region display method
CN113709451A (en) * 2021-08-25 2021-11-26 北京世纪互联宽带数据中心有限公司 Video contrast playing method and device
CN114095668A (en) * 2021-10-08 2022-02-25 深圳市景阳科技股份有限公司 Video playing method, device, equipment and computer storage medium

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN105025200A (en) * 2015-08-06 2015-11-04 成都市斯达鑫辉视讯科技有限公司 Method for supervising user by set top box
CN105898133A (en) * 2015-08-19 2016-08-24 乐视网信息技术(北京)股份有限公司 Video shooting method and device
CN107551521B (en) * 2017-08-17 2020-05-08 广州视源电子科技股份有限公司 Fitness guidance method and device, intelligent equipment and storage medium
CN108566519B (en) * 2018-04-28 2022-04-12 腾讯科技(深圳)有限公司 Video production method, device, terminal and storage medium
CN109432753B (en) * 2018-09-26 2020-12-29 Oppo广东移动通信有限公司 Action correcting method, device, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN112188267A (en) 2021-01-05

Similar Documents

Publication Publication Date Title
CN112188267B (en) Video playing method, device and equipment and computer storage medium
CN109547819B (en) Live list display method and device and electronic equipment
CN106658200B (en) Live video sharing and acquiring method and device and terminal equipment thereof
CN110968736B (en) Video generation method and device, electronic equipment and storage medium
CN103606310B (en) Teaching method and system
US8990842B2 (en) Presenting content and augmenting a broadcast
CN111541936A (en) Video and image processing method and device, electronic equipment and storage medium
US10448081B2 (en) Multimedia information processing method, terminal, and computer storage medium for interactive user screen
US11025967B2 (en) Method for inserting information push into live video streaming, server, and terminal
JP6467554B2 (en) Message transmission method, message processing method, and terminal
WO2014178219A1 (en) Information processing device and information processing method
CN111930994A (en) Video editing processing method and device, electronic equipment and storage medium
CN110472099B (en) Interactive video generation method and device and storage medium
CN112752121B (en) Video cover generation method and device
CN111800668B (en) Barrage processing method, barrage processing device, barrage processing equipment and storage medium
CN109361954B (en) Video resource recording method and device, storage medium and electronic device
CN113573090A (en) Content display method, device and system in game live broadcast and storage medium
CN105808231B (en) System and method for recording and playing script
CN114339375A (en) Video playing method, method for generating video directory and related product
CN113132780A (en) Video synthesis method and device, electronic equipment and readable storage medium
US11553255B2 (en) Systems and methods for real time fact checking during stream viewing
US20230300429A1 (en) Multimedia content sharing method and apparatus, device, and medium
CN115237314B (en) Information recommendation method and device and electronic equipment
CN113395585B (en) Video detection method, video play control method, device and electronic equipment
CN113391745A (en) Method, device, equipment and storage medium for processing key contents of network courses

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221117

Address after: 1402, Floor 14, Block A, Haina Baichuan Headquarters Building, No. 6, Baoxing Road, Haibin Community, Xin'an Street, Bao'an District, Shenzhen, Guangdong 518133

Patentee after: Shenzhen Yayue Technology Co.,Ltd.

Address before: 35th floor, Tencent building, Keji Zhongyi Road, high tech Zone, Nanshan District, Shenzhen City, Guangdong Province

Patentee before: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.
