CN115119062A - Video splitting method, display device and display method

Info

Publication number
CN115119062A
Authority
CN
China
Prior art keywords
video
video frame
label
tag
key
Prior art date
Legal status
Pending
Application number
CN202110299096.6A
Other languages
Chinese (zh)
Inventor
刘晓潇
李广琴
黄利
孙锦
Current Assignee
Hisense Group Holding Co Ltd
Original Assignee
Hisense Group Holding Co Ltd
Priority date
Filing date
Publication date
Application filed by Hisense Group Holding Co Ltd
Priority to CN202110299096.6A
Publication of CN115119062A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47: End-user applications
    • H04N 21/472: End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/47217: End-user interface for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/44008: Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N 21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83: Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845: Structuring of content, e.g. decomposing content into time segments
    • H04N 21/8456: Structuring of content by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a video splitting method, a display device and a display method. Key video frames are determined from a video stream, where each key video frame and its preceding video frame meet a set degree-of-difference requirement; tag video frames are then determined from the key video frames, where, for any first tag video frame among the tag video frames, the first tag video frame indicates that the content shown by the video frames located between the first tag video frame and a second tag video frame is of the same type; a corresponding video trigger mark is set for each tag video frame, and the video trigger mark is used to jump the video stream, according to user input, to the tag video frame corresponding to that mark and start playing from there. In this way a video stream can be split into video clips automatically, and when applied specifically to a cosmetic mirror scenario, users can switch at will, and as needed, among the video clips covering the specific use of each cosmetic product in a makeup video, which greatly improves the user experience.

Description

Video splitting method, display device and display method
Technical Field
The embodiments of the application relate to the field of smart homes, and in particular to a video splitting method, a display device and a display method.
Background
With the continuous development of technology, intelligent products are becoming increasingly abundant, among them intelligent cosmetic mirrors. An intelligent cosmetic mirror can provide comprehensive and rich makeup video tutorials for the user based on an analysis of the user's face shape and facial features and on the user's requirements for different makeup looks. However, makeup video tutorials come in a wide variety and are typically long, so watching them is time consuming and it is not easy for the user to jump quickly to the video segment the user really wants to watch or learn from.
At present, each makeup video tutorial can be segmented by manual labeling to improve the user experience. However, the number of makeup video tutorials available today is enormous, so the manual labeling approach obviously involves a heavy workload and is difficult to implement.
In summary, there is a need for a technique for automatically segmenting a makeup video tutorial.
Disclosure of Invention
The application provides a video splitting method, a display device and a display method, which are used to solve the technical problems that watching a makeup video tutorial is time consuming and that it is not easy for the user to quickly jump to the video content the user really wants to watch.
In a first aspect, an embodiment of the present application provides a video splitting method. The method includes: determining key video frames from a video stream, where any key video frame and its preceding video frame meet a set degree-of-difference requirement; determining tag video frames from the key video frames, where, for any first tag video frame among the tag video frames, the first tag video frame indicates that the content shown by the video frames located between the first tag video frame and a second tag video frame is of the same type, the second tag video frame being a tag video frame adjacent to the first tag video frame; and setting a corresponding video trigger mark for each tag video frame, where the video trigger mark is used to jump the video stream, according to user input, to the tag video frame corresponding to the video trigger mark and start playing from there.
Based on this scheme, the video stream is condensed to extract the key video frames, the tag video frames are then determined from them, and any two tag video frames differ in type of displayed content; finally, a video trigger mark is set for each tag video frame, so that when a user triggers a video trigger mark, playback can switch quickly from the video node currently being watched to the group of video clips where the tag video frame corresponding to that trigger mark is located. In this way a video stream can be split into video clips automatically, and when applied specifically to an intelligent cosmetic mirror scenario, users can switch at will, and as needed, among the video clips covering the specific use of each cosmetic product in a makeup video, which greatly improves the user experience.
In one possible implementation, determining the key video frames from the video stream includes: determining a first key video frame from the video stream; and, for any video frame in the video stream that follows the first key video frame, determining the similarity between that video frame and the first key video frame, and determining the video frame whose similarity meets the degree-of-difference requirement as a second key video frame, the video frame being any video frame between the first key video frame and the video frame adjacent to the second key video frame.
Based on this scheme, in a video stream several consecutive video frames often show little difference in displayed content, so the video stream is condensed based on a (content) similarity requirement: the video frames whose similarity meets the degree-of-difference requirement are taken as key video frames, and in this way all the key video frames corresponding to the video stream can be obtained.
In one possible implementation, the video stream is a makeup video, and a tag video frame is a video frame showing product display content.
Based on this scheme, makeup videos are extremely popular at present, and, in connection with their playback, the scheme of the application is also suitable for splitting them. Moreover, because of a characteristic of makeup videos, namely that before formally using a cosmetic product the makeup blogger tends to show the product to the camera, the scheme of the application can further take, from the key video frames, the video frames showing product display content as tag video frames, and then use the tag video frames as nodes for splitting the makeup video. In the end, one makeup video can be split into video segments whose nodes are the cosmetic products, so that the user can quickly jump to and switch among the video content the user needs.
In one possible implementation, determining the tag video frames from the key video frames includes: performing face detection on the key video frames; and determining the tag video frames, through a preset model, from the key video frames in which no face is detected.
Based on this scheme, the key video frames can cover several situations: they may include video frames with a face and video frames without a face, and the frames without a face can be further divided into frames whose main subject is product display content and frames whose main subject is a non-face element. For key video frames with these characteristics, face detection can first be performed on each key video frame and the preset model then applied, so that the video frames whose main subject is product display content are identified from the key video frames and form the tag video frames. When the video stream is split, it can then be divided into video segments at the tag video frames, and once the video segments are formed, the user can quickly jump from the current viewing node to the video segment the user wants to watch and learn from, according to the user's actual viewing and learning needs, which improves the user experience.
In one possible implementation, the video trigger mark is a tag trigger key, and each tag trigger key is displayed on the playing interface when the video stream is played.
Based on this scheme, for a video stream, by implementing the video trigger mark corresponding to a tag video frame as a tag trigger key, the user can, by clicking the tag trigger key, switch quickly from the video node currently being watched to the group of video clips where the tag video frame corresponding to that tag trigger key is located. In this way a video stream can be split into video clips automatically, and when applied specifically to an intelligent cosmetic mirror scenario, users can switch at will, and as needed, among the video clips covering the specific use of each cosmetic product in a makeup video, which greatly improves the user experience.
In one possible implementation, setting a corresponding video trigger mark for each tag video frame includes: determining the tag corresponding to each tag video frame according to the audio information of the video stream; and setting, for each tag video frame, a video trigger mark carrying the corresponding tag.
Based on this scheme, in a makeup video the makeup blogger generally shows the cosmetic product to be used in front of the camera before applying it, and briefly introduces it. Accordingly, the audio information corresponding to a tag video frame can be obtained from the video stream, the tag corresponding to the tag video frame can be determined from that audio information, and finally a video trigger mark corresponding to the tag can be formed. When the user triggers the video trigger mark, playback can jump quickly from the video node currently being watched to the group of video clips corresponding to the tag, which meets the user's need to switch at will, and as needed, among the video clips covering the specific use of each cosmetic product in a makeup video, and greatly improves the user experience.
In one possible implementation, setting, for each tag video frame, a video trigger mark carrying the corresponding tag includes: classifying the tags to obtain class tags; setting a class video trigger mark for the tag video frame corresponding to a class tag, where the tag video frame corresponding to the class tag is determined from the tag video frame corresponding to the tag at the earliest playing position under that class tag; and setting a sub-video trigger mark for the tag video frame corresponding to each tag under the class tag.
Based on this scheme, when a video trigger mark carrying the corresponding tag is set for each tag video frame, the tags that have been formed can first be classified to obtain class tags. A class video trigger mark can then be set for each class tag, so that when the user triggers a class video trigger mark, playback can jump quickly from the video node currently being watched to a large video segment spliced from the groups of video clips corresponding to the tag video frames belonging to that class tag, the playing start point of the large segment being the tag video frame corresponding to the tag under the class tag at the earliest playing position in the video stream. Finally, in order to further refine this large video segment, a sub-video trigger mark can be set for the tag video frame corresponding to each tag under the class tag, so that the user can jump more directly to a specific item of interest.
In a second aspect, an embodiment of the present application provides a display device, including a display unit and a controller. The display unit is configured to display a video stream; the video stream includes video trigger marks corresponding to tag video frames, where, for any first tag video frame among the tag video frames, the first tag video frame indicates that the content shown by the video frames located between the first tag video frame and a second tag video frame is of the same type, the second tag video frame being a tag video frame adjacent to the first tag video frame. The controller is configured to: in response to user input, control the display unit to jump the video stream to the tag video frame corresponding to a video trigger mark and start playing from there, where the video trigger marks correspond one-to-one to the user inputs.
Based on this scheme, in the display device the display unit may be used to display the video stream and the video trigger marks corresponding to the tag video frames in the video stream, and the controller may be used to receive a trigger instruction for a video trigger mark sent by the user and, according to the trigger instruction, start playing from the tag video frame corresponding to that video trigger mark. By forming the tag video frames corresponding to the video stream, where any two tag video frames differ in type of displayed content, and by setting video trigger marks for the tag video frames, the user can, on triggering a video trigger mark, switch quickly from the video node currently being watched to the group of video clips where the corresponding tag video frame is located, which improves the user's experience while watching the video.
In one possible implementation, the display device is a cosmetic mirror.
Based on this scheme, the display device can automatically split a video stream into video clips, and when applied specifically to an intelligent cosmetic mirror scenario, users can switch at will, and as needed, among the video clips covering the specific use of each cosmetic product in a makeup video, which greatly improves the user experience.
In a third aspect, an embodiment of the present application provides a display method, including: receiving user input, where the user input indicates that the video stream is to jump to the tag video frame corresponding to a video trigger mark and start playing from there, and the video trigger marks correspond one-to-one to the user inputs; the video stream includes video trigger marks corresponding to tag video frames, where, for any first tag video frame among the tag video frames, the first tag video frame indicates that the content shown by the video frames located between the first tag video frame and a second tag video frame is of the same type, the second tag video frame being a tag video frame adjacent to the first tag video frame; and playing from the tag video frame corresponding to the video trigger mark.
Based on this scheme, after a trigger instruction for a video trigger mark sent by the user is received, playback can start from the tag video frame corresponding to that video trigger mark according to the trigger instruction. By forming the tag video frames corresponding to the video stream, where any two tag video frames differ in type of displayed content, and by setting video trigger marks for the tag video frames, the user can, on triggering a video trigger mark, switch quickly from the video node currently being watched to the group of video clips where the corresponding tag video frame is located, which improves the user's experience while watching the video.
In a fourth aspect, an embodiment of the present application provides a computing device, including:
a memory for storing a computer program;
a processor for calling the computer program stored in the memory and executing the method according to the obtained program.
In a fifth aspect, the present application provides a computer-readable storage medium, which stores a computer program for causing a computer to execute the method according to any one of the first aspect and/or the third aspect.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without inventive effort.
Fig. 1 is a schematic diagram of a video splitting method according to an embodiment of the present application;
Fig. 2 is a schematic diagram of key video frame extraction according to an embodiment of the present application;
Fig. 3 is a schematic diagram of key video frames without a human face according to an embodiment of the present application;
Fig. 4 is a schematic diagram of the development process of a speech recognition module according to an embodiment of the present application;
Fig. 5 is a schematic diagram of a video playing interface according to an embodiment of the present application;
Fig. 6 is a schematic diagram of a display device according to an embodiment of the present application;
Fig. 7 is a schematic diagram of a cosmetic mirror according to an embodiment of the present application;
Fig. 8 is a schematic diagram of a display method according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application clearer, the present application will be described in further detail with reference to the accompanying drawings, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
At present, the intelligent cosmetic mirrors popular on the market can, through analysis of the user's face shape and facial features and according to the user's requirements for different makeup looks, provide the user with a variety of content-rich makeup video tutorials, so that the user can select a favourite makeup video tutorial to watch and learn from as needed.
In the above scenario, where the user watches and learns from a makeup video tutorial displayed on the intelligent cosmetic mirror, the following problems exist:
1. A makeup video is rich in content and long, and users do not want to watch and learn from the whole video, but rather hope to jump to the specific parts they want to learn.
2. Different users have mastered makeup skills to different degrees, so the video clips they need to watch naturally differ; for example, some users may already be proficient in base makeup, or may have had their eyebrows tattooed and so do not need to watch how eyebrow makeup is drawn.
3. The difficulty of learning makeup differs for different parts of the face; for example, for simpler operations (such as base makeup) a user does not want to watch repeatedly, while for harder makeup skills (such as eyebrows, contouring, eye shadow and the like) the user usually needs to watch repeatedly in order to learn.
4. Makeup video resources are increasingly abundant, and segmenting each makeup video by manual labeling involves a heavy workload and is not easy to implement.
In order to solve the above technical problems, as shown in fig. 1, a video splitting method provided by an embodiment of the present application includes the following steps:
Step 101, determining key video frames from a video stream, where any key video frame and its preceding video frame meet a set degree-of-difference requirement.
In this step, for any video stream, the display content of several consecutive video frames usually does not differ much during recording. Therefore, by performing a key video frame extraction operation on the video stream, the content redundancy of the video stream data can be reduced, which speeds up data processing in the subsequent steps.
Step 102, determining each tag video frame from the key video frames; for any first tag video frame among the tag video frames, the first tag video frame indicates that the content shown by the video frames located between the first tag video frame and a second tag video frame is of the same type, the second tag video frame being a tag video frame adjacent to the first tag video frame.
In this step, from the key video frames extracted from the video stream, the tag video frames can be further determined. The video frames between any two adjacent tag video frames (i.e. a first tag video frame and a second tag video frame) show content of the same type, and in this way the video stream can be split into groups of video segments whose playing start points are the tag video frames.
Step 103, setting a corresponding video trigger mark for each tag video frame; the video trigger mark is used to jump the video stream, according to user input, to the tag video frame corresponding to the video trigger mark and start playing from there.
In this step, for the groups of video clips whose playing start points are tag video frames, a corresponding video trigger mark is set for each tag video frame, so that when the user watches the video stream and triggers a video trigger mark, playback can switch quickly from the video node currently being watched to the group of video clips whose playing start point is the tag video frame corresponding to the trigger mark selected by the user, which improves the user experience.
Based on this scheme, the video stream is condensed to extract the key video frames, the tag video frames are further determined, any two tag video frames differ in type of displayed content, and finally a video trigger mark is set for each tag video frame, so that when the user triggers a video trigger mark, playback can switch quickly from the video node currently being watched to the group of video clips where the tag video frame corresponding to that trigger mark is located. In this way a video stream can be split into video clips automatically, and when applied specifically to an intelligent cosmetic mirror scenario, users can switch at will, and as needed, among the video clips covering the specific use of each cosmetic product in a makeup video, which greatly improves the user experience.
Some of the above steps will be described in detail with reference to examples.
In one implementation of step 101, determining the key video frames from the video stream includes: determining a first key video frame from the video stream; and, for any video frame in the video stream that follows the first key video frame, determining the similarity between that video frame and the first key video frame, and determining the video frame whose similarity meets the degree-of-difference requirement as a second key video frame, the video frame being any video frame between the first key video frame and the video frame adjacent to the second key video frame.
As mentioned above, in a video stream several consecutive frames often differ very little in display content, so it is important to condense the video stream and extract the key video frames from it. Extracting the key video frames reduces the content redundancy of the video stream data, and the extraction principle can prioritize the difference between frames, that is, the result is reduced in quantity while still reflecting the video content. For example, key video frames can be extracted using the similarity between video frames as the measure.
There are many methods for extracting key video frames. In the embodiment of the present application, an image-feature-based key video frame extraction method may be used. For example, the first video frame of a video stream may be taken as the first key video frame, and then the video frames after it in the video stream are compared with it one by one to determine whether there is a large change between the two frames. The set degree-of-difference requirement can serve as the criterion for identifying a key video frame: if the two video frames do not change much, they are considered not to meet the set degree-of-difference requirement, and the video frame currently being compared is not taken as the next key video frame; otherwise, the video frame currently being compared is taken as the next key video frame, which can be called the second key video frame. That key video frame then becomes the object of study in the next round, that is, the second key video frame determined in the previous round is taken as the first key video frame of the new round, the video frames after it in the video stream are compared with it on image features one by one, and the above steps are repeated in a loop, so that all the key video frames corresponding to the video stream can be obtained.
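As an illustration only (the patent does not prescribe a particular similarity measure), the following Python sketch compares each frame against the most recent key frame using an HSV colour-histogram correlation; the threshold value, histogram sizes and video path are assumptions chosen for demonstration, not part of the patent.

```python
# Hedged sketch of image-feature-based key video frame extraction.
# Assumes OpenCV (pip install opencv-python); the threshold is illustrative.
import cv2

def extract_key_frames(video_path, similarity_threshold=0.85):
    """Return (frame_index, frame) pairs whose similarity to the previous
    key frame is low enough, i.e. frames that changed sufficiently."""
    cap = cv2.VideoCapture(video_path)
    key_frames = []
    prev_hist = None
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is None:
            key_frames.append((index, frame))   # first frame = first key frame
            prev_hist = hist
        elif cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) < similarity_threshold:
            key_frames.append((index, frame))   # change large enough: new key frame
            prev_hist = hist                    # later frames are compared against it
        index += 1
    cap.release()
    return key_frames
```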
Fig. 2 is a schematic diagram of key video frame extraction according to an embodiment of the present application. Parts (a) and (b) of fig. 2 show that, within each group of redundant video frames, the first video frame of the group is extracted, so that the extracted first video frame can serve as a key video frame. Clearly, by condensing the redundant video frames in the video stream in this way, invalid and repeated video frames can largely be filtered out, which speeds up data processing in the subsequent steps.
In some implementations of the present application, the video stream is a makeup video, and a tag video frame is a video frame showing product display content.
For example, in an application scenario of the intelligent cosmetic mirror, the user can enter the watching-and-learning mode for a makeup video by clicking one of the makeup videos displayed on the mirror surface of the intelligent cosmetic mirror. Because of a characteristic of makeup videos, namely that while recording a makeup video the makeup blogger often shows each cosmetic product in front of the camera before switching to it, the embodiment of the present application can further take, from the key video frames, the video frames showing product display content as the tag video frames, and then use the tag video frames as nodes for splitting the makeup video. In the end, one makeup video can be split into video segments whose nodes are the cosmetic products, so that the user can quickly jump to and switch among the video content the user needs.
In some implementations of the present application, determining the tag video frames from the key video frames includes: performing face detection on the key video frames; and determining the tag video frames, through a preset model, from the key video frames in which no face is detected.
For a makeup video, when its key video frames are obtained through the extraction operation, the key video frames include video frames of the following two cases:
case 1: a video frame having a human face;
case 2: video frames without human faces.
For case 1, a video frame with a face usually indicates that the makeup blogger is applying a cosmetic product to herself, a process that does not involve describing the cosmetic product; that is, the blogger is demonstrating makeup techniques and/or details in front of the camera rather than showing the product being used. Therefore, the key video frames with a face cannot serve as nodes for splitting the makeup video, so they can be filtered out, i.e. they are not taken as objects of study in the subsequent steps.
For an image illustration of key video frames with a face, refer to fig. 2.
For case 2, the video frames without a face can be further divided into video frames whose main subject is product display content and video frames whose main subject is a non-face element. Fig. 3 is a schematic diagram of key video frames without a face according to an embodiment of the present application. Parts (a) and (b) of fig. 3 show video frames whose main subject is a non-face element, and parts (c) and (d) of fig. 3 show video frames whose main subject is product display content.
From the analysis of the key video frames in these two cases, a video frame that has no face and whose main subject is product display content can serve as a node for splitting the makeup video; that is, a key video frame whose main subject is product display content can be used as a tag video frame. To determine the tag video frames from the key video frames, the embodiment of the present application can proceed as follows.
In the first step, face detection is performed on the extracted key video frames. The face detection threshold can be set relatively high so that a video frame showing only a small part of a face is not treated as a face video frame. The video frames in which no face is detected (i.e. video frames without a face, corresponding to case 2 above) are screened out for further use, and the face video frames (i.e. video frames with a face, corresponding to case 1 above) are filtered out. As a visual illustration, the video frames in which no face is detected in this step may be those shown in (a), (b), (c) and (d) of fig. 3.
In the second step, the video frames obtained in the first step in which no face was detected are classified using a pre-trained neural network classification model, the key video frames whose main subject is product display content are screened out, and these screened-out key video frames are used as the nodes for splitting the makeup video. In this step, the key video frames whose main subject is a non-face element can be discarded, that is, they are not used as nodes for splitting the makeup video.
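The two-step filtering just described could look roughly like the sketch below; the Haar-cascade face detector, the relatively large minimum face size standing in for the "higher detection threshold", and the classify_product_frame callable are illustrative assumptions, not the patent's concrete implementation.

```python
# Hedged sketch: drop key frames containing a face, then keep the face-free
# frames that a pre-trained classifier judges to be product-display frames.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def has_face(frame, min_size=(80, 80)):
    """min_size is deliberately large so a frame with only a sliver of a face
    is not treated as a face video frame (the stricter threshold above)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=5, minSize=min_size)
    return len(faces) > 0

def select_tag_frames(key_frames, classify_product_frame):
    """key_frames: list of (index, frame). classify_product_frame: callable
    returning True when a face-free frame mainly shows a cosmetic product."""
    tag_frames = []
    for index, frame in key_frames:
        if has_face(frame):
            continue                              # case 1: not a split node
        if classify_product_frame(frame):
            tag_frames.append((index, frame))     # case 2: product-display frame
    return tag_frames
```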
Illustratively, the neural network classification model in the above example may be trained as follows: first, several makeup videos can be collected and the key video frame extraction operation performed on each of them; then, from the extracted key video frames, the key video frames in which no face is recognized can be further extracted to form a data set; the data set is then divided into two classes by manual labeling, namely video frames whose main subject is product display content and video frames whose main subject is a non-face element; finally, a convolutional neural network is trained with the labeled data set so that it can accurately distinguish the two classes of video frames in the data set, finally yielding the neural network classification model referred to in the above example.
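A minimal training sketch, assuming a manually labelled folder layout dataset/{product,other}/ and a recent torchvision, might fine-tune a small convolutional backbone as the two-class frame classifier; the backbone choice and all hyper-parameters here are placeholders rather than values given in the patent.

```python
# Hedged sketch of training the two-class frame classifier (product display
# vs. non-face other). Assumes PyTorch and torchvision >= 0.13.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("dataset", transform=transform)  # assumed layout
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)    # two output classes

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(5):                           # illustrative epoch count
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```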
In some implementations of the present application, the video trigger mark is a tag trigger key, and each tag trigger key is displayed on the playing interface when the video stream is played.
For a video stream, the video trigger marks of the tag video frames can be implemented in many ways, such as key triggering, voice triggering, and triggering based on image recognition. To let the user trigger a tag video frame more intuitively, the embodiment of the present application may adopt key triggering; that is, a video trigger mark may be implemented as a tag trigger key, and the tag trigger keys corresponding to the tag video frames may be displayed on the playing interface when the video stream is played. Thus, for the video stream being played, the user can click a tag trigger key displayed on the playing interface and quickly switch from the video node currently being watched to the group of video segments where the tag video frame corresponding to that tag trigger key is located. Triggering by keys in this way clearly provides a good user experience.
In one implementation of step 103, setting a corresponding video trigger mark for each tag video frame includes: determining the tag corresponding to each tag video frame according to the audio information of the video stream; and setting, for each tag video frame, a video trigger mark carrying the corresponding tag.
For a makeup video, the makeup blogger typically shows the cosmetic product to be applied before applying it, and briefly introduces it. Accordingly, the audio information corresponding to a tag video frame can be obtained from the video stream, the tag corresponding to the tag video frame can be determined from that audio information, and finally a video trigger mark corresponding to the tag can be formed, so that when the user triggers a video trigger mark, playback can jump quickly from the video node currently being watched to the group of video clips corresponding to that tag; this meets the user's need to switch at will, and as needed, among the video clips covering the specific use of each cosmetic product in a makeup video, and greatly improves the user experience. Following the earlier example, for a makeup video, the key video frames whose main subject is product display content may be used as the nodes for splitting the video, that is, those key video frames are used as tag video frames. Next, the audio information N seconds before and after a tag video frame may be extracted, and the tag corresponding to the tag video frame may then be extracted from that audio information; for example, the tag may be the name of the cosmetic product that the makeup blogger is introducing and showing to the user. In the makeup setting, the names of cosmetic products include, but are not limited to: concealer, foundation, loose powder, setting powder, blush, eyebrow powder, eyebrow pencil, mascara, eye shadow, eyeliner, mascara brush, highlight, shadow, lipstick, lip gloss and lip balm. Therefore, if the cosmetic product being shown in a certain tag video frame is a foundation, 'foundation' can be used as the tag corresponding to that tag video frame and as its video trigger mark. Furthermore, when the video trigger mark is a tag trigger key, the user can, by clicking the 'foundation' tag trigger key displayed on the playing interface of the makeup video currently being played, quickly switch from the video node currently being watched to the group of video clips where the tag video frame corresponding to the 'foundation' tag trigger key is located. It will be appreciated that in this group of video clips, the makeup blogger first gives a close-up of the foundation in front of the camera and then applies it.
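How the audio window N seconds before and after a tag video frame is cut out is not spelled out in the patent; one possible sketch simply extracts the window with ffmpeg so it can be handed to a speech-recognition service. The paths, the value of N and the frame rate below are assumptions.

```python
# Hedged sketch: cut the audio from (t - N) to (t + N) seconds around a tag
# video frame. Assumes the ffmpeg command-line tool is installed.
import subprocess

def extract_audio_window(video_path, frame_index, fps, n_seconds, out_wav):
    t = frame_index / fps                      # timestamp of the tag video frame
    start = max(0.0, t - n_seconds)
    subprocess.run([
        "ffmpeg", "-y",
        "-ss", str(start),                     # seek to N seconds before the frame
        "-t", str(2 * n_seconds),              # keep a 2N-second window
        "-i", video_path,
        "-vn",                                 # drop the video track
        "-acodec", "pcm_s16le", "-ar", "16000", "-ac", "1",
        out_wav,
    ], check=True)

# e.g. extract_audio_window("makeup_tutorial.mp4", frame_index=5400, fps=30,
#                           n_seconds=5, out_wav="tag_5400.wav")
```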
It should be noted that, in the embodiment of the present application, extracting the audio information located N seconds before and after a tag video frame, and extracting the tag corresponding to the tag video frame from that audio information, can be performed on the DUI (Dialogue User Interface) open platform of AISpeech. The DUI open platform is AISpeech's first full-link dialogue customization platform with a built-in voice and language skill store. The services it provides include dialogue functions based on AISpeech's intelligent voice and language technology as well as the comprehensive services a developer needs when customizing a dialogue system, such as GUI customization, version management and private cloud deployment, so that developers can customize a dialogue interaction system as required; its application scenarios mainly include vehicles, the home, robots, story machines, mobile phone assistants and the like. On the DUI development platform, for the recognition requirement of video segmentation tags, a custom skill for video segmentation tag recognition can be completed by developing a lexicon, semantic slots, intents and so on; then, by customizing the product, handling exceptions and the like, and adding the built special skill to the customized product, the speech recognition module for the video segmentation tag recognition requirement can be completed. Fig. 4 is a schematic diagram of the development process of the speech recognition module provided in the embodiment of the present application. By setting up the lexicon, the sub-tags of a makeup video can be merged, that is, each of several sub-classes belonging to a certain makeup category is classified under that category; for example, words such as 'concealer', 'foundation', 'loose powder' and 'setting' all belong to the 'base makeup' category, so recognizing any of these sub-classes can return the tag of that makeup category. Through the creation of intents, whatever the expression used, the tag of a keyword is returned as long as the keyword is encountered during speech recognition; for example, for the two sentences 'first conceal a bit deeper than the skin colour before foundation' and 'then use a PF foundation', the tag 'foundation' is returned as the speech recognition result.
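Since the DUI platform itself is a hosted service, a local stand-in for its lexicon and intent behaviour could be a simple keyword table, as sketched below; the keywords, category names and matching rule are illustrative assumptions, not the platform's API.

```python
# Hedged sketch of the lexicon idea: any recognised sentence containing a
# product-name keyword returns that keyword as the tag, together with the
# makeup category it was merged into.
LEXICON = {
    "concealer": "base makeup", "foundation": "base makeup",
    "loose powder": "base makeup", "setting powder": "base makeup",
    "eyebrow pencil": "eyebrow makeup", "eyebrow powder": "eyebrow makeup",
    "eye shadow": "eye makeup", "eyeliner": "eye makeup",
    "highlight": "contouring", "shadow": "contouring",
    "lipstick": "lip makeup", "lip gloss": "lip makeup",
}

def extract_tag(transcript):
    """Return (product_tag, category_tag) for the first keyword found in the
    recognised transcript, or (None, None) if nothing matches."""
    text = transcript.lower()
    for keyword, category in LEXICON.items():
        if keyword in text:
            return keyword, category
    return None, None

print(extract_tag("then use a PF foundation"))   # ('foundation', 'base makeup')
```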
Further, the segmented makeup video clips may be merged according to the tag information; for example, several tags that correspond to the same makeup category may be merged, the merged makeup category may then be used as a tag of another form, and a video trigger mark with the makeup category as its tag is formed. When a makeup category is used as a tag, it can cover several sub-tags, each corresponding to a sub-video trigger mark, the sub-tag being the name of a cosmetic product. Therefore, when the video trigger mark is a tag trigger key, after the user clicks the tag trigger key whose tag is a makeup category, several sub-tag trigger keys whose sub-tags are the names of cosmetic products can be displayed, and the user can then, by clicking one of the sub-tag trigger keys, quickly jump from the video node currently being watched to the group of video clips where the tag video frame corresponding to that sub-tag is located, which improves the user experience.
For example, in the makeup video scenario, concealer, foundation, loose powder and setting can be merged into a base makeup category; eyebrow powder, eyebrow pencil, eyebrow tint and eyebrow brush into an eyebrow makeup category; eye shadow and eyeliner into an eye makeup category; highlight and shadow into a contouring category; lipstick, lip gloss and lip balm into a lip makeup category; and so on. Thus, on the playing interface of the makeup video being watched, several tag trigger keys labelled 'base makeup', 'eyebrow makeup', 'eye makeup', 'contouring', 'lip makeup' and the like can be displayed. If the user clicks the 'base makeup' tag trigger key according to the user's actual needs, the sub-tags under the 'base makeup' tag can be displayed, for example 'concealer', 'foundation', 'loose powder' and 'setting'; further, if the user then clicks the sub-tag trigger key of 'foundation', the user can quickly jump from the video node currently being watched to the group of video clips where the tag video frame corresponding to the 'foundation' sub-tag trigger key is located.
In some implementations of the present application, setting, for each tag video frame, a video trigger mark carrying the corresponding tag includes: classifying the tags to obtain class tags; setting a class video trigger mark for the tag video frame corresponding to a class tag, where the tag video frame corresponding to the class tag is determined from the tag video frame corresponding to the tag at the earliest playing position under that class tag; and setting a sub-video trigger mark for the tag video frame corresponding to each tag under the class tag.
Following the earlier example, by combining the audio information corresponding to the tag video frames in the video stream, a makeup video can be split into segments whose playing start points are tag video frames. Since the main subject of any tag video frame is product display content, the tag corresponding to the tag video frame can be further extracted from the extracted audio information; for example, the tag may be the name of the cosmetic product that the makeup blogger is introducing and showing to the user.
In the embodiment of the present application, after the makeup video is split, tags named after the various cosmetic products can be formed, for example concealer, foundation, loose powder, setting powder, blush, eyebrow powder, eyebrow pencil, eyebrow tint, eye shadow, eyeliner, mascara brush, highlight, shadow, lipstick, lip gloss and lip balm.
Next, considering that some cosmetic products actually belong to the same makeup category, for example concealer, foundation, loose powder and setting can be merged into a base makeup category, eyebrow powder, eyebrow pencil, eyebrow tint and eyebrow brush into an eyebrow makeup category, eye shadow and eyeliner into an eye makeup category, highlight and shadow into a contouring category, and lipstick, lip gloss and lip balm into a lip makeup category, class tags can be formed, for example 'base makeup', 'eyebrow makeup', 'eye makeup', 'contouring' and 'lip makeup'.
Then, for any class tag, a class video trigger mark corresponding to that class tag may be set. When the class video trigger mark is implemented by key triggering, that is, when it is a class-tag trigger key, the user can, by clicking the class-tag trigger key, jump from the video node currently being watched to the large segment, corresponding to that class tag, formed by the groups of video clips of its tags, the playing start point of this large segment being the tag video frame corresponding to the tag under the class tag that is played earliest in the makeup video currently being watched.
Finally, since a class tag covers several tags, in order to let the user quickly locate a particular tag under a class tag, sub-video trigger marks corresponding to those tags can be set under the class tag. When a sub-video trigger mark is implemented by key triggering, that is, when it is a sub-tag trigger key, the user can, by clicking a sub-tag trigger key, quickly jump from the video node currently being watched to the group of video clips where the tag video frame corresponding to that tag is located.
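One possible data structure for the class and sub-video trigger marks just described is sketched below: each class tag jumps to the earliest tag video frame among its tags, while each tag keeps its own jump point. The category table and timestamps are illustrative assumptions.

```python
# Hedged sketch of the two-level trigger-mark structure.
from collections import defaultdict

CATEGORY_OF = {
    "concealer": "base makeup", "foundation": "base makeup",
    "eyebrow pencil": "eyebrow makeup", "eye shadow": "eye makeup",
    "highlight": "contouring", "lipstick": "lip makeup",
}

def build_trigger_marks(tag_frames):
    """tag_frames: list of (timestamp_seconds, product_tag).
    Returns {category: {"jump_to": earliest_ts, "sub": {tag: ts, ...}}}."""
    marks = defaultdict(lambda: {"jump_to": None, "sub": {}})
    for ts, tag in sorted(tag_frames):
        entry = marks[CATEGORY_OF.get(tag, "other")]
        if entry["jump_to"] is None:
            entry["jump_to"] = ts          # earliest playing position in the class
        entry["sub"][tag] = ts             # sub-video trigger mark for this tag
    return dict(marks)

marks = build_trigger_marks([(12.0, "foundation"), (95.5, "eyebrow pencil"),
                             (4.0, "concealer"), (210.0, "eye shadow")])
print(marks["base makeup"]["jump_to"])     # 4.0, i.e. the concealer segment
```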
It should be noted that the class-tag trigger keys and sub-tag trigger keys described above may be implemented as follows: if the user moves the mouse over a class-tag trigger key without clicking it, all the tags under that class tag can appear, so that after the user selects and clicks one of them, playback jumps to the group of video clips where the tag video frame corresponding to that tag is located; if the user moves the mouse to the class-tag trigger key and clicks it, playback jumps to the large segment, corresponding to that class tag, formed by the groups of video clips of its tags.
In the embodiment of the present application, after the makeup video has been split by cosmetic product, the user can, while watching the video, selectively and quickly jump to the video clip to be learned according to the user's own needs and the user's mastery of the makeup skills for different parts of the face. At the same time, related products can be recommended to the user on the makeup video playing interface according to the different video clip tags, which makes it convenient for the user to browse and purchase them. Fig. 5 is a schematic diagram of a video playing interface provided in the embodiment of the present application. In fig. 5 the application product is an intelligent cosmetic mirror: the upper area is the mirror surface of the intelligent cosmetic mirror, and the lower area is the playing interface of a makeup video. As can be seen from fig. 5, the makeup video is currently in the 'paused' state, that is, the user can later resume it by clicking the 'resume' button on the right side of the playing interface. While watching a makeup video, if the user clicks one of the tag trigger keys on the right side of the playing interface (for example 'eyebrow', 'blush', 'lip gloss' or 'eyeliner'), playback quickly switches from the current viewing node to the video clip indicated by that tag; at the same time, corresponding product recommendations are given on the left side of the playing interface, which the user can view, save or purchase as needed, and the related product recommendations can of course be closed with the 'close recommendations' control in the upper right corner. Alternatively, the quick switch to the video segment the user wants to watch may take place after the user has first paused the makeup video: during the pause, the user can click one of the tag trigger keys on the right side of the playing interface and jump directly from the paused node to the playing of the video clip indicated by that tag.
Based on the same concept, an embodiment of the present application also provides a display apparatus. As shown in fig. 6, the apparatus includes a display unit 601 and a controller 602, wherein:
the display unit 601 is configured to display a video stream; the video stream includes video trigger marks corresponding to tag video frames, where, for any first tag video frame among the tag video frames, the first tag video frame indicates that the content shown by the video frames located between the first tag video frame and a second tag video frame is of the same type, the second tag video frame being a tag video frame adjacent to the first tag video frame;
the controller 602 is configured to: in response to user input, control the display unit to jump the video stream to the tag video frame corresponding to a video trigger mark and start playing from there; the video trigger marks correspond one-to-one to the user inputs.
Further, the display device may be a cosmetic mirror.
Fig. 7 shows a cosmetic mirror according to an embodiment of the present application. To control the cosmetic mirror, a display unit 701 and a controller 702 are provided in it, the display unit 701 being connected to the controller 702. In one possible embodiment, the display unit 701 of the cosmetic mirror may be a mirror-integrated display, that is, a mirror material is provided on the outer surface of a display panel. When the display unit 701 is off, the cosmetic mirror behaves like an ordinary mirror and can be used for grooming; when the display unit 701 is lit, it can display content for intelligent interaction with the user. If the user selects a half-screen display mode, the upper half of the display unit 701 can display the content for intelligent interaction with the user while the lower half behaves like an ordinary mirror for grooming, or the other way round; if the user selects a full-screen display mode, the content for intelligent interaction with the user can be displayed on the whole screen.
Based on the same concept, an embodiment of the present application further provides a display method. As shown in fig. 8, the method includes:
Step 801, receiving user input; the user input indicates that the video stream is to jump to the tag video frame corresponding to a video trigger mark and start playing from there, and the video trigger marks correspond one-to-one to the user inputs; the video stream includes video trigger marks corresponding to tag video frames, where, for any first tag video frame among the tag video frames, the first tag video frame indicates that the content shown by the video frames located between the first tag video frame and a second tag video frame is of the same type, the second tag video frame being a tag video frame adjacent to the first tag video frame;
Step 802, playing from the tag video frame corresponding to the video trigger mark.
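As a hedged sketch of steps 801 and 802 (the patent targets a cosmetic mirror player rather than a desktop window), an OpenCV-based player could handle the jump as follows; the mapping from trigger key to frame index is assumed to come from the splitting stage described earlier.

```python
# Hedged sketch: on user input, seek to the tag video frame bound to the
# selected trigger mark and play from there.
import cv2

def play_from_trigger(video_path, trigger_marks, trigger_key):
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, trigger_marks[trigger_key])  # seek to tag frame
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("makeup tutorial", frame)
        if cv2.waitKey(33) & 0xFF == ord("q"):   # roughly 30 fps; press q to stop
            break
    cap.release()
    cv2.destroyAllWindows()

# e.g. play_from_trigger("makeup_tutorial.mp4", {"foundation": 5400}, "foundation")
```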
The embodiment of the present application provides a computing device, which may specifically be a desktop computer, a portable computer, a smart phone, a tablet computer, a personal digital assistant (PDA) or the like. The computing device may include a central processing unit (CPU), a memory and input/output devices; the input devices may include a keyboard, a mouse, a touch screen and the like, and the output devices may include a display device such as a liquid crystal display (LCD) or a cathode ray tube (CRT).
The memory, which may include Read Only Memory (ROM) and Random Access Memory (RAM), provides the processor with the program instructions and data stored therein. In an embodiment of the present application, the memory may be configured to store the program instructions of the video splitting method and the display method;
and the processor is configured to call the program instructions stored in the memory and execute the video splitting method and the display method according to the obtained program instructions.
An embodiment of the present application provides a computer-readable storage medium storing computer-executable instructions for causing a computer to execute the video splitting method and the display method.
As will be appreciated by one of skill in the art, embodiments of the present application may be provided as a method or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A video splitting method is characterized by comprising the following steps:
determining key video frames from a video stream, wherein each key video frame meets a preset degree-of-difference requirement with respect to the video frame preceding it;
determining tag video frames from the key video frames, wherein for any first tag video frame among the tag video frames, the first tag video frame is used for indicating that the content shown by the video frames located between the first tag video frame and a second tag video frame is of the same type; the second tag video frame is a tag video frame adjacent to the first tag video frame;
and setting a corresponding video trigger mark for each tag video frame, wherein the video trigger mark is used for jumping the video stream, according to a user input, to the tag video frame corresponding to the video trigger mark and starting playback from there.
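For illustration only, the three operations recited in claim 1 might be organised as in the following Python sketch; the two stage functions are supplied by the caller (illustrative versions are sketched after claims 2 and 4 below), frames are treated as (timestamp, image) pairs, and none of the identifiers come from this application.

    def split_video(video_path, detect_key_frames, select_tag_frames):
        # stage 1: key video frames meeting the preset difference requirement
        key_frames = detect_key_frames(video_path)          # list of (timestamp, image) pairs
        # stage 2: tag video frames delimiting runs of same-type content
        tag_frames = select_tag_frames(key_frames)
        # stage 3: one video trigger mark per tag video frame; a user input carrying
        # that mark later makes playback jump to the corresponding timestamp
        return {f"trigger_{i}": ts for i, (ts, _img) in enumerate(tag_frames)}

    # example wiring with the sketches after claims 2 and 4 (binding an assumed preset model):
    # triggers = split_video("makeup.mp4", detect_key_frames,
    #                        lambda kf: select_tag_frames(kf, classify_product=my_model))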
2. The method of claim 1,
wherein determining the key video frames from the video stream comprises:
determining a first key video frame from the video stream;
and for any video frame in the video stream subsequent to the first key video frame, determining the similarity between that video frame and the first key video frame, and determining the video frame whose similarity meets the degree-of-difference requirement as a second key video frame; the video frame is any one of the video frames from the first key video frame up to the video frame adjacent to the second key video frame.
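One possible reading of claim 2 is sketched below in Python using OpenCV; the HSV-histogram correlation measure and the 0.8 threshold are assumptions standing in for whatever similarity measure and difference requirement an implementation actually uses.

    import cv2  # OpenCV, used here only to provide an example similarity measure

    def _hist(image):
        # HSV colour histogram used as a cheap signature of a frame
        hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        return cv2.normalize(hist, hist).flatten()

    def detect_key_frames(video_path, threshold=0.8):
        cap = cv2.VideoCapture(video_path)
        key_frames, ref_hist = [], None
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            ts = cap.get(cv2.CAP_PROP_POS_MSEC) / 1000.0   # timestamp in seconds
            hist = _hist(frame)
            # the first frame becomes the first key frame; a later frame becomes the next
            # key frame once its similarity to the current key frame falls below the threshold
            if ref_hist is None or cv2.compareHist(ref_hist, hist, cv2.HISTCMP_CORREL) < threshold:
                key_frames.append((ts, frame))
                ref_hist = hist
        cap.release()
        return key_frames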
3. The method of claim 1,
wherein the video stream is a makeup video;
and the tag video frame is a video frame showing product display content.
4. The method of claim 3,
wherein determining the tag video frames from the key video frames comprises:
performing face detection on each key video frame;
and determining the tag video frames, through a preset model, from the key video frames that contain no human face.
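The filtering of claim 4 might look like the following sketch, which uses an OpenCV Haar cascade as a stand-in face detector and a caller-supplied classify_product() callable as a stand-in for the preset model; both are assumptions for illustration.

    import cv2

    # assumed stand-in face detector shipped with OpenCV
    _face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def select_tag_frames(key_frames, classify_product):
        # key_frames: list of (timestamp, image) pairs; classify_product: the "preset model",
        # here any callable returning True when an image shows product display content
        tag_frames = []
        for ts, image in key_frames:
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            faces = _face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(faces) > 0:
                continue                     # keep only key frames without a human face
            if classify_product(image):      # preset model flags product display content
                tag_frames.append((ts, image))
        return tag_frames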
5. The method of claim 1, wherein the video trigger mark is a tag trigger key; and each tag trigger key is displayed on the playing interface when the video stream is played.
6. The method of any one of claims 1 to 5,
wherein setting the corresponding video trigger mark for each tag video frame comprises:
determining the tag corresponding to each tag video frame according to audio information of the video stream;
and setting, for each tag video frame, a video trigger mark carrying the corresponding tag.
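An illustrative sketch of the tag determination of claim 6 follows; it assumes a transcribe(start, end) helper (for example, any speech-recognition service) and a fixed keyword list, neither of which is specified by this application.

    MAKEUP_KEYWORDS = ["foundation", "eyebrow", "eye shadow", "eyeliner", "lipstick", "blush"]

    def determine_tags(tag_frames, transcribe, window_s=5.0):
        # tag_frames: list of (timestamp, image) pairs; transcribe(start_s, end_s) is an
        # assumed helper returning the speech transcript of that interval of the audio track
        tags = {}
        for ts, _image in tag_frames:
            text = transcribe(ts, ts + window_s).lower()
            # the first keyword heard near the tag video frame is taken as its tag
            tags[ts] = next((kw for kw in MAKEUP_KEYWORDS if kw in text), "other")
        return tags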
7. The method of claim 6,
wherein setting, for each tag video frame, the video trigger mark carrying the corresponding tag comprises:
classifying the tags to obtain tag classes;
setting a class video trigger mark for the tag video frame corresponding to each tag class, wherein the tag video frame corresponding to a tag class is determined according to the tag video frame of the tag having the earliest playing position within that tag class;
and setting a sub video trigger mark for the tag video frame corresponding to each tag under the tag class.
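The class and sub video trigger marks of claim 7 might be derived as in the sketch below, assuming the {timestamp: tag} mapping from the previous sketch and an assumed label_class() function mapping each tag to its tag class; the earliest timestamp within a class is used for the class trigger mark.

    def build_trigger_marks(tags, label_class):
        # tags: {timestamp: tag} as produced above; label_class(tag) is an assumed mapping
        # from a tag to its tag class, e.g. "eye shadow" -> "eye makeup"
        class_triggers, sub_triggers = {}, {}
        for ts, tag in sorted(tags.items()):
            cls = label_class(tag)
            # class video trigger mark: points at the earliest-playing tag video frame of the class
            class_triggers.setdefault(cls, ts)
            # sub video trigger mark: one per tag under the class
            sub_triggers.setdefault((cls, tag), ts)
        return class_triggers, sub_triggers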
8. A display device, comprising:
a display unit and a controller;
the display unit is configured to display the video stream; the video stream comprises video trigger marks corresponding to the tag video frames, and for any first tag video frame among the tag video frames, the first tag video frame is used for indicating that the content shown by the video frames located between the first tag video frame and a second tag video frame is of the same type; the second tag video frame is a tag video frame adjacent to the first tag video frame;
and the controller is configured to: in response to a user input, control the display unit to jump the video stream to the tag video frame corresponding to a video trigger mark and start playing from that frame; the video trigger marks correspond one-to-one to the user inputs.
9. The display device of claim 8,
wherein the display device is a cosmetic mirror.
10. A display method, comprising:
receiving a user input, wherein the user input indicates that the video stream is to jump to the tag video frame corresponding to a video trigger mark and start playing from there, the video trigger marks being in one-to-one correspondence with user inputs; the video stream comprises video trigger marks corresponding to the tag video frames, and for any first tag video frame among the tag video frames, the first tag video frame is used for indicating that the content shown by the video frames located between the first tag video frame and a second tag video frame is of the same type; the second tag video frame is a tag video frame adjacent to the first tag video frame;
and playing the video stream from the tag video frame corresponding to the video trigger mark.
CN202110299096.6A 2021-03-20 2021-03-20 Video splitting method, display device and display method Pending CN115119062A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110299096.6A CN115119062A (en) 2021-03-20 2021-03-20 Video splitting method, display device and display method

Publications (1)

Publication Number Publication Date
CN115119062A true CN115119062A (en) 2022-09-27

Family

ID=83323347

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110299096.6A Pending CN115119062A (en) 2021-03-20 2021-03-20 Video splitting method, display device and display method

Country Status (1)

Country Link
CN (1) CN115119062A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105677735A (en) * 2015-12-30 2016-06-15 腾讯科技(深圳)有限公司 Video search method and apparatus
CN109446365A (en) * 2018-08-30 2019-03-08 新我科技(广州)有限公司 A kind of intelligent cosmetic exchange method and storage medium
CN210158204U (en) * 2019-06-27 2020-03-20 京东方科技集团股份有限公司 Cosmetic mirror
CN111541912A (en) * 2020-04-30 2020-08-14 北京奇艺世纪科技有限公司 Video splitting method and device, electronic equipment and storage medium
CN112004163A (en) * 2020-08-31 2020-11-27 北京市商汤科技开发有限公司 Video generation method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
武法提: "网络教育应用" (Applications of Network Education), 高考教育出版社, pages 205-206 *

Similar Documents

Publication Publication Date Title
CN109688463B (en) Clip video generation method and device, terminal equipment and storage medium
US20220110435A1 (en) Makeup processing method and apparatus, electronic device, and storage medium
CN103488764B (en) Individualized video content recommendation method and system
CN110825901A (en) Image-text matching method, device and equipment based on artificial intelligence and storage medium
US20180121733A1 (en) Reducing computational overhead via predictions of subjective quality of automated image sequence processing
CN114390217B (en) Video synthesis method, device, computer equipment and storage medium
Xu et al. Hierarchical affective content analysis in arousal and valence dimensions
JP2011215963A (en) Electronic apparatus, image processing method, and program
CN110929158A (en) Content recommendation method, system, storage medium and terminal equipment
CN105721936A (en) Intelligent TV program recommendation system based on context awareness
CN116127054B (en) Image processing method, apparatus, device, storage medium, and computer program
US20240153395A1 (en) Tracking concepts and presenting content in a learning system
CN113343720A (en) Subtitle translation method and device for subtitle translation
Fei et al. Creating memorable video summaries that satisfy the user’s intention for taking the videos
US11653071B2 (en) Responsive video content alteration
CN113722542A (en) Video recommendation method and display device
CN112083863A (en) Image processing method and device, electronic equipment and readable storage medium
CN115119062A (en) Video splitting method, display device and display method
CN116910302A (en) Multi-mode video content effectiveness feedback visual analysis method and system
CN110019862B (en) Courseware recommendation method, device, equipment and storage medium
KR20240077627A (en) User emotion interaction method and system for extended reality based on non-verbal elements
CN114840711A (en) Intelligent device and theme construction method
CN113438532B (en) Video processing method, video playing method, video processing device, video playing device, electronic equipment and storage medium
CN115309487A (en) Display method, display device, electronic equipment and readable storage medium
CN114038034A (en) Virtual face selection model training method, online video psychological consultation privacy protection method, storage medium and psychological consultation system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220927