CN112203109A - Video information output method based on feature recognition and intelligent terminal


Info

Publication number: CN112203109A
Authority: CN (China)
Prior art keywords: information, feature, information output, identification result, application program
Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202011188315.5A
Other languages: Chinese (zh)
Inventor: 周国霞
Current Assignee: Individual (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Individual
Application filed by Individual
Priority claimed from CN202011188315.5A
Publication of CN112203109A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/441 Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/443 OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 Supplemental services communicating with other users, e.g. chatting
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/8166 Monomedia components thereof involving executable data, e.g. software
    • H04N21/8173 End-user applications, e.g. Web browser, game

Abstract

The invention provides a video information output method based on feature recognition and an intelligent terminal. The method establishes a first information output thread and a parallel second information output thread at the same time and synchronizes the video information into both threads. When the user switches between a first application program and a second application program on the intelligent terminal, the collected feature information of the user is identified to judge whether a pull-back signal is triggered. If the pull-back signal is triggered when the user switches back to the first application program, the video information is output, based on the second information output thread, from the moment at which the user last switched away from the first application program. The user therefore does not need to pull back the progress bar manually when switching back to the first application program, which on the one hand makes video playback more convenient and, on the other hand, does not affect the normal operation of the first information output thread.

Description

Video information output method based on feature recognition and intelligent terminal
Technical Field
The invention relates to the technical field of live video, and in particular to a video information output method based on feature recognition and an intelligent terminal.
Background
With the development of science and technology, intelligent terminals such as mobile phones, tablets and notebook computers have become increasingly powerful. Today's intelligent terminals can output various kinds of information (video information, voice information, etc.) in real time. Taking live video as an example, live-streaming technology has developed rapidly, and people can watch live video from the major streaming platforms on an intelligent terminal anytime and anywhere.
However, when live video is watched on an intelligent terminal and the terminal has to handle another task, the output of the live video is interrupted; when the terminal finishes the other task and resumes the live stream, the video played during the interruption has been missed. The existing remedy is to make the live video seekable, but the user still has to pull the progress bar back manually.
Disclosure of Invention
To solve the above problems, the invention provides a video information output method based on feature recognition and an intelligent terminal.

In a first aspect of the embodiments of the present invention, a video information output method based on feature recognition is provided. The method is applied to an intelligent terminal that communicates with a server and in which a first application program and a second application program are installed. The method includes:

when video information acquired from the server is imported into a first information output thread established through the first application program and the video information is output based on the first information output thread, establishing, through the first application program, a second information output thread parallel to the first information output thread, and synchronizing the video information into the second information output thread while the video information is output based on the first information output thread;

detecting in real time whether a first application switching instruction input by a user is received, and, when the first application switching instruction is detected, switching out of the first application program to run the second application program; and acquiring the cut-out time of the first application program and the video frame of the video information at the cut-out time;

when the second application program is running, collecting feature information of the user through the second application program, and identifying the feature information to obtain an identification result;

judging, based on the identification result, whether a pull-back signal for the video information is triggered; if the pull-back signal for the video information is triggered, then, when a second application switching instruction input by the user is detected, switching out of the second application program and running the first application program, determining in the second information output thread a target frame corresponding to the video frame, and continuing to output, from the target frame and based on the second information output thread, the video information synchronized into the second information output thread.
Optionally, the identifying the feature information to obtain an identification result includes:

detecting whether an acquisition device integrated in the intelligent terminal sends a detection signal; and when it is detected that the acquisition device has sent the detection signal, determining the category of the feature information collected by the acquisition device through the type identifier of the acquisition device;

allocating a first feature identifier to the feature information according to the category of the feature information, wherein the first feature identifier is used for distinguishing feature categories; and determining a target feature conversion list in a preset database according to the first feature identifier, wherein the preset database stores a plurality of feature conversion lists, each feature conversion list is provided with a second feature identifier, the second feature identifier of the target feature conversion list is consistent with the first feature identifier, and each feature conversion list stores correspondences between preset feature information and feature vectors;

finding, in the target feature conversion list, first preset feature information corresponding to the feature information, and determining a target feature vector corresponding to the feature information based on the correspondence associated with the first preset feature information in the target feature conversion list;

inputting the target feature vector into a pre-built convolutional neural network, identifying the target feature vector based on the convolutional neural network, and outputting an evaluation label corresponding to the target feature vector;

and determining the time weight of the evaluation label, and determining the identification result corresponding to the feature information based on the evaluation label and its time weight.
Optionally, the determining, based on the evaluation label and its time weight, an identification result corresponding to the feature information includes:

when there are multiple categories of feature information, determining the evaluation factor of the evaluation label of each category of feature information, wherein the evaluation factor is a first evaluation factor when the evaluation label is a first evaluation label, a second evaluation factor when the evaluation label is a second evaluation label, and a third evaluation factor when the evaluation label is a third evaluation label;

weighting the first evaluation factor of each category of feature information by the time weight of that category of feature information to obtain a second evaluation factor;

determining, among all the second evaluation factors, a first proportion of second evaluation factors falling in a first value interval and a second proportion of second evaluation factors falling in a second value interval, wherein the first value interval is bounded by a first endpoint and a second endpoint, the second value interval is bounded by the second endpoint and a third endpoint, the first endpoint is smaller than the second endpoint, the second endpoint is smaller than the third endpoint, and neither value interval includes the second endpoint;

and determining the identification result corresponding to the multiple categories of feature information according to the first proportion and the second proportion.
Optionally, the determining, according to the first proportion and the second proportion, an identification result corresponding to the multiple categories of feature information includes:

when the first proportion is greater than the second proportion, determining the identification result corresponding to the multiple categories of feature information to be a first identification result, wherein the first identification result indicates that the pull-back signal is triggered;

when the first proportion is smaller than the second proportion, determining the identification result corresponding to the multiple categories of feature information to be a second identification result, wherein the second identification result indicates that the pull-back signal is not triggered;

when the first proportion is equal to the second proportion, determining a first mean of the second evaluation factors corresponding to the first proportion and a second mean of the second evaluation factors corresponding to the second proportion; when the first mean is greater than the second mean, determining the identification result corresponding to the multiple categories of feature information to be the first identification result; when the first mean is smaller than the second mean, determining it to be the second identification result; and when the first mean is equal to the second mean, determining it to be a third identification result, wherein the third identification result instructs the intelligent terminal to output prompt information, and the prompt information asks the user whether to trigger the pull-back signal manually when switching back to the first application program.
Optionally, the method further includes:

if the identification result is the second identification result, outputting the video information from its current frame based on the first information output thread, wherein the current frame comes after the video frame.

Optionally, the method further includes:

if the identification result is the third identification result, outputting the prompt information when the second application program is switched out and the first application program is run; if a first signal fed back by the user based on the prompt information is received, determining in the second information output thread a target frame corresponding to the video frame, and continuing to output, from the target frame and based on the second information output thread, the video information synchronized into the second information output thread; and if the first signal fed back by the user based on the prompt information is not received, outputting the video information from its current frame based on the first information output thread, wherein the current frame comes after the video frame.

Optionally, after the step of synchronizing the video information into the second information output thread while the video information is output based on the first information output thread, the method further includes:

deleting the corresponding information in the second information output thread according to the partial information already output by the first information output thread.
In a second aspect of the embodiments of the present invention, a video information output apparatus based on feature recognition is provided, including:

a synchronization module, configured to, when video information acquired from a server is imported into a first information output thread established through a first application program and the video information is output based on the first information output thread, establish, through the first application program, a second information output thread parallel to the first information output thread, and synchronize the video information into the second information output thread while the video information is output based on the first information output thread;

a switching module, configured to detect in real time whether a first application switching instruction input by a user is received, and, when the first application switching instruction is detected, switch out of the first application program to run the second application program; and acquire the cut-out time of the first application program and the video frame of the video information at the cut-out time;

an identification module, configured to collect feature information of the user through the second application program when the second application program is running, and identify the feature information to obtain an identification result;

and an output module, configured to judge, based on the identification result, whether a pull-back signal for the video information is triggered; if the pull-back signal for the video information is triggered, then, when a second application switching instruction input by the user is detected, switch out of the second application program and run the first application program, determine in the second information output thread a target frame corresponding to the video frame, and continue to output, from the target frame and based on the second information output thread, the video information synchronized into the second information output thread.

In a third aspect of the embodiments of the present invention, an intelligent terminal is provided, including a processor, and a memory and a bus connected to the processor; the processor and the memory communicate with each other through the bus; and the processor is configured to call a computer program in the memory to execute the above video information output method based on feature recognition.

In a fourth aspect of the embodiments of the present invention, a readable storage medium is provided, on which a program is stored; when executed by a processor, the program implements the above video information output method based on feature recognition.
The video information output method based on feature recognition and the intelligent terminal provided by the invention can establish a first information output thread and a parallel second information output thread and synchronize the video information into both threads. When the user switches between the first application program and the second application program on the intelligent terminal, the collected feature information of the user is identified to judge whether a pull-back signal is triggered. If the pull-back signal is triggered when the user switches back to the first application program, the video information is output, based on the second information output thread, from the moment at which the user last switched away from the first application program. The user therefore does not need to pull back the progress bar manually when switching back to the first application program, which on the one hand makes video playback more convenient and, on the other hand, does not affect the normal operation of the first information output thread.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required for the embodiments are briefly described below. It should be understood that the following drawings show only some embodiments of the present invention and should therefore not be regarded as limiting the scope; those skilled in the art can derive other related drawings from these drawings without inventive effort.
Fig. 1 is a flowchart of a video information output method based on feature recognition according to an embodiment of the present invention.
Fig. 2 is a functional block diagram of a video information output apparatus based on feature recognition according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of product modules of an intelligent terminal according to an embodiment of the present invention.
Reference numerals:
200: intelligent terminal; 201: video information output apparatus based on feature recognition; 2011: synchronization module; 2012: switching module; 2013: identification module; 2014: output module; 211: processor; 212: memory; 213: bus.
Detailed Description
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that the disclosure will be thorough and complete and will fully convey its scope to those skilled in the art.

For a better understanding of the technical solutions of the present invention, they are described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific features in the embodiments and examples of the present invention are detailed descriptions of the technical solutions, not limitations of them, and the technical features in the embodiments and examples may be combined with each other where no conflict arises.

On an ordinary intelligent terminal, when the output of a live video is interrupted at a breakpoint, the user still has to pull the progress bar back manually to return to the breakpoint moment and resume the live video. To solve this problem, an embodiment of the invention provides a video information output method based on feature recognition and an intelligent terminal.
Referring to fig. 1, a flowchart of a video information output method based on feature recognition according to an embodiment of the present invention is shown. The method is applicable to an intelligent terminal that communicates with a server and in which a first application program and a second application program are installed.

In this embodiment, the first application may be a live-streaming APP, and the second application is different from the first application; for example, the second application may be chat software. The intelligent terminal obtains video information from the server and outputs it through the first application program; for example, the video information may be live video information.
On the basis of the above, the method may specifically include the following.
Step S21: when the video information obtained from the server is imported into a first information output thread established through a first application program and the video information is output based on the first information output thread, establish, through the first application program, a second information output thread parallel to the first information output thread, and synchronize the video information into the second information output thread while the video information is output based on the first information output thread.

In this embodiment, the description takes the first application as a live-streaming APP and the video information as live video. The intelligent terminal obtains the video information from the server, imports it into the first information output thread for output, and synchronizes it into the second information output thread.
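To make the parallel arrangement concrete, the following minimal Python sketch mirrors step S21. It is an illustration only, not the patent's implementation; the class OutputThread, the render callbacks and the frame strings are all invented for this example:

```python
import queue
import threading
import time

class OutputThread(threading.Thread):
    """Minimal stand-in for an information output thread: it drains a
    frame queue and hands every frame to a render callback."""
    def __init__(self, render):
        super().__init__(daemon=True)
        self.frames = queue.Queue()
        self.render = render

    def run(self):
        while True:
            self.render(self.frames.get())

# The first thread renders to the screen; the second, parallel thread
# only buffers frames so they can be replayed later (step S21).
backup_buffer = []
primary = OutputThread(render=lambda f: print("play", f))
backup = OutputThread(render=backup_buffer.append)
primary.start()
backup.start()

for frame in ["f0", "f1", "f2"]:   # frames fetched from the server
    primary.frames.put(frame)      # import into the first output thread
    backup.frames.put(frame)       # synchronize into the second thread

time.sleep(0.1)                    # let both daemon threads drain
```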
Step S22: detect in real time whether a first application switching instruction input by a user is received, and, when the first application switching instruction is detected, switch out of the first application program to run the second application program; and acquire the cut-out time of the first application program and the video frame of the video information at the cut-out time.

In this embodiment, if the user receives a message sent by the second application (chat software) while watching the live broadcast through the first application, the user may input an application switching instruction through the intelligent terminal. The first application switching instruction may be a touch instruction or a key instruction and is used to switch from the running first application to the second application.

In this embodiment, the cut-out time is denoted t1 by way of example, and the video frame of the video information at time t1 is f1.
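As a rough illustration of step S22, recording the cut-out point might look like the sketch below (the player_state dictionary and its current_frame_index key are assumptions of this example, not the patent's API):

```python
import time

def on_switch_to_second_app(player_state):
    """Record the cut-out time (t1) and the frame being shown at that
    moment (f1), so the pull-back target can be located later."""
    return {
        "cut_out_time": time.time(),                            # t1
        "cut_out_frame": player_state["current_frame_index"],   # f1
    }
```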
Step S23: when the second application program is running, collect feature information of the user through the second application program, and identify the feature information to obtain an identification result.

In this embodiment, the feature information may be biometric information of the user, such as voice information or face image information, or text information input by the user through the second application program; it is not limited here.

Step S24: judge, based on the identification result, whether a pull-back signal for the video information is triggered; if the pull-back signal for the video information is triggered, then, when a second application switching instruction input by the user is detected, switch out of the second application program and run the first application program, determine in the second information output thread a target frame corresponding to the video frame, and continue to output, from the target frame and based on the second information output thread, the video information synchronized into the second information output thread.

In this embodiment, the pull-back signal indicates the user's desire to resume watching the video information from the recorded video frame. For example, the user switches out of the first application program at time t1 and, after completing the related operations in the second application program, switches back to the first application program at time t2; if the pull-back signal has been triggered, the intelligent terminal plays the live video from time t1 based on the second information output thread.
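A sketch of the resume logic in step S24, under the same assumptions as the previous fragments (frames are buffered in output order, and the cut-out frame index recorded at t1 serves as the target frame):

```python
def resume_after_switch_back(pull_back_triggered, cut_out_frame, backup_buffer):
    """If the pull-back signal was triggered, continue output from the
    target frame using the frames synchronized into the second thread;
    otherwise the first thread has simply kept playing live."""
    if pull_back_triggered:
        for frame in backup_buffer[cut_out_frame:]:
            print("replay", frame)   # hand each frame to the renderer
```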
The user therefore does not need to pull back the progress bar manually when switching back to the first application program, which on the one hand makes video playback more convenient and, on the other hand, does not affect the normal operation of the first information output thread.

It can be understood that, through steps S21-S24, a first information output thread and a parallel second information output thread can be established and the video information synchronized into both. When the user switches between the first application program and the second application program on the intelligent terminal, the collected feature information of the user is identified and it is judged whether a pull-back signal is triggered. If the pull-back signal is triggered when the user switches back to the first application program, the video information can be output, based on the second information output thread, from the moment at which the user last switched away, so the user does not need to pull back the progress bar manually.

In a specific implementation, outputting the video information from the target frame based on the second information output thread can be understood as a video-resume function built on the first application program. To avoid affecting the normal output of the live video, this function is realized through the parallel first and second information output threads.

In practice, however, the key to realizing this function is how to establish the two parallel information output threads so that the video information in them stays synchronized while the operating load on the intelligent terminal is kept low. To this end, in step S21, establishing, through the first application program, a second information output thread parallel to the first information output thread may specifically include the following.
Step S211: according to a first memory resource occupied by the first application in the intelligent terminal, determine a decompression path for decompressing the application compression package corresponding to the first application, where the decompression path is the path from the storage location of the compression package in the intelligent terminal to the cache location where the first application runs.

Step S212: judge whether the decompression path contains path nodes that meet a preset condition. A path node is a storage space in the intelligent terminal, and the preset condition is that the node weight values of all the path nodes forming the decompression path fall within a set value interval. A node weight value characterizes the delay the decompression path would suffer if the corresponding path node were damaged; a node weight value falling within the set interval indicates that damage to that path node would delay the decompression path.

Step S213: if such path nodes exist, determine the log file produced when the first information output thread was built, where the log file contains the operation records of building the first information output thread; extract from the operation records a first operation aimed at the path nodes that meet the preset condition, and parse the first operation to obtain a script file, where the script file is the execution file through which the first application program builds the first information output thread.

Step S214: determine the second memory resource of the intelligent terminal that would be occupied by building, in the first application program, a video information output thread whose delay relative to the first information output thread is smaller than a preset value; according to the first memory resource and the second memory resource, determine whether the intelligent terminal would be in an overload state while building that thread. If so, extract a target file from the script file, where the target file, when run, establishes the channel for importing the target program into the first information output thread; compress the script file around the target file; based on the compressed script file, determine the third memory resource that would be occupied by building a video information output thread whose delay relative to the first information output thread is smaller than the preset value; and, once the terminal is determined not to be in an overload state, run the compressed script file to build the video information output thread and take it as the second information output thread.

It can be understood that, through steps S211 to S214, the script file can be compressed around the target file it contains for importing the target program into the first information output thread, taking into account the memory resources of the intelligent terminal occupied when the output thread is built, and the second information output thread is then built from the compressed script file. In this way, the operating load of the intelligent terminal can be reduced effectively while the video information in the first and second information output threads stays synchronized, and the terminal is prevented from entering an overload state.
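The decompression-path analysis and script compression above are specific to the patent; the memory guard of step S214, however, can be pictured with a heavily simplified sketch. The function name, the memory figures and the 80% threshold are all assumptions of this example:

```python
def can_build_second_thread(first_thread_mem_mb, second_thread_mem_mb,
                            total_mem_mb, overload_ratio=0.8):
    """Treat the terminal as overloaded if both output threads together
    would exceed a fixed share of total memory; only in that case would
    the script file be compressed before building the second thread."""
    used = first_thread_mem_mb + second_thread_mem_mb
    return used <= overload_ratio * total_mem_mb

# e.g. 1.2 GB + 0.9 GB on a 4 GB terminal -> True, build directly
print(can_build_second_thread(1200, 900, 4000))
```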
In a specific implementation, accurately determining the recognition result is the key to judging the pull-back signal correctly. For this purpose, not only the category of the feature information but also the time weight of the feature information must be considered. Accordingly, in step S23, identifying the feature information to obtain an identification result may specifically include the following.
Step S231: detect whether an acquisition device integrated in the intelligent terminal sends a detection signal; and when it is detected that the acquisition device has sent the detection signal, determine the category of the feature information collected by the acquisition device through the type identifier of the acquisition device.

In this embodiment, the acquisition devices include, but are not limited to, a microphone, a front-facing camera and a text collector. The detection signal is sent to the intelligent terminal when the acquisition device collects the corresponding feature information.

Step S232: allocate a first feature identifier to the feature information according to the category of the feature information, where the first feature identifier is used for distinguishing feature categories; and determine a target feature conversion list in a preset database according to the first feature identifier, where the preset database stores a plurality of feature conversion lists, each feature conversion list is provided with a second feature identifier, the second feature identifier of the target feature conversion list is consistent with the first feature identifier, and each feature conversion list stores correspondences between preset feature information and feature vectors.

Step S233: find, in the target feature conversion list, the first preset feature information corresponding to the feature information, and determine the target feature vector corresponding to the feature information based on the correspondence associated with the first preset feature information in the target feature conversion list.

In this embodiment, the target feature vector may serve as the prediction input of the convolutional neural network.

Step S234: input the target feature vector into a pre-built convolutional neural network, identify the target feature vector based on the convolutional neural network, and output the evaluation label corresponding to the target feature vector.

In this embodiment, the convolutional neural network is configured to identify the user intention corresponding to the feature vector, where the user intention is represented by an evaluation label, and the evaluation labels may include a first evaluation label, a second evaluation label and a third evaluation label. The first evaluation label indicates that the user's attention to the video information is high, the second that it is normal, and the third that it is low.

Step S235: determine the time weight of the evaluation label, and determine the identification result corresponding to the feature information based on the evaluation label and its time weight.

In this embodiment, the time weight characterizes the duration associated with the feature information: the shorter the time the user takes to produce the feature information, the larger the time weight, which indicates that the user wants to finish the corresponding operation in the second application program quickly and then continue watching the live video in the first application program.

It can be understood that, through steps S231 to S235, both the category and the time weight of the feature information are taken into account, which ensures the accuracy and reliability of the recognition result.
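Putting steps S231-S235 together, a simplified sketch of the recognition pipeline might look as follows. Everything here is illustrative: the device-to-category table, the conversion "lists" modelled as dictionaries, the classify callable standing in for the pre-built convolutional neural network, and the reciprocal time-weight formula are all assumptions of this example:

```python
DEVICE_CATEGORY = {          # type identifier -> feature category (S231)
    "microphone": "voice",
    "front_camera": "face",
    "text_collector": "text",
}

CONVERSION_LISTS = {         # second feature identifier -> conversion list
    "voice": {"quick reply": [0.9, 0.1]},
    "text":  {"be right back": [0.8, 0.2]},
}

def recognize(device_type, feature_info, duration_s, classify):
    """S231: category via the device's type identifier; S232-S233: look
    the feature up in the matching conversion list to get the target
    feature vector; S234: run the CNN stand-in to get an evaluation
    label; S235: shorter input -> larger time weight (assumed form)."""
    category = DEVICE_CATEGORY[device_type]
    vector = CONVERSION_LISTS[category][feature_info]
    label = classify(vector)
    time_weight = 1.0 / max(duration_s, 1.0)
    return label, time_weight

# Example with a trivial classifier standing in for the CNN:
label, w = recognize("microphone", "quick reply", 2.0,
                     classify=lambda v: "first" if v[0] > 0.5 else "second")
print(label, w)   # -> first 0.5
```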
In a specific implementation, many categories of feature information may be collected, and the evaluation labels of different categories may contradict one another. In this case, to ensure the accuracy of the identification result, the determining, in step S234, of the identification result based on the evaluation labels and their time weights may specifically include the following.

Step S2341: when there are multiple categories of feature information, determine the evaluation factor of the evaluation label of each category, where the evaluation factor is a first evaluation factor when the evaluation label is the first evaluation label, a second evaluation factor when it is the second evaluation label, and a third evaluation factor when it is the third evaluation label.

Step S2342: weight the first evaluation factor of each category of feature information by the time weight of that category to obtain a second evaluation factor.

Step S2343: determine, among all the second evaluation factors, the first proportion of second evaluation factors falling in a first value interval and the second proportion of second evaluation factors falling in a second value interval. The first value interval is bounded by a first endpoint and a second endpoint, and the second value interval by the second endpoint and a third endpoint; the first endpoint is smaller than the second endpoint, the second endpoint is smaller than the third endpoint, and neither interval includes the second endpoint.

In this embodiment, the second evaluation factor may range over 0 to 1. Accordingly, if the first endpoint is 0, the second endpoint 0.5 and the third endpoint 1, the first value interval may be [0, 0.5) and the second value interval (0.5, 1]. The first proportion may be denoted x1 and the second proportion x2.

Step S2344: determine the identification result corresponding to the multiple categories of feature information according to the first proportion and the second proportion.

In this embodiment, the identification result is determined by comparing the first proportion with the second proportion, so that mutually exclusive evaluation labels across the categories of feature information are taken into account, every category is analysed, and the accuracy of the identification result is ensured.
On this basis, determining the identification result corresponding to the multiple categories of feature information according to the first proportion and the second proportion may cover the following cases.

First, when the first proportion x1 is greater than the second proportion x2, the identification result corresponding to the multiple categories of feature information is determined to be a first identification result, which indicates that the pull-back signal is triggered.

Second, when the first proportion x1 is smaller than the second proportion x2, the identification result is determined to be a second identification result, which indicates that the pull-back signal is not triggered.

Third, when the first proportion x1 is equal to the second proportion x2, a first mean d1 of the second evaluation factors p1 corresponding to the first proportion x1 and a second mean d2 of the second evaluation factors p2 corresponding to the second proportion x2 are determined. When d1 is greater than d2, the identification result is determined to be the first identification result; when d1 is smaller than d2, the second identification result; and when d1 equals d2, a third identification result, which instructs the intelligent terminal to output prompt information asking the user whether to trigger the pull-back signal manually when switching back to the first application program.
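The three cases condense into a small decision function. The sketch below simply formalizes the rules just listed, assuming the weighted factors lie in [0, 1] with both intervals excluding the shared endpoint 0.5:

```python
def ratios(second_factors):
    """x1: share of weighted factors in [0, 0.5); x2: share in (0.5, 1]."""
    n = len(second_factors)
    x1 = sum(1 for p in second_factors if 0.0 <= p < 0.5) / n
    x2 = sum(1 for p in second_factors if 0.5 < p <= 1.0) / n
    return x1, x2

def decide(x1, x2, d1, d2):
    """First result: trigger the pull-back signal. Second: do not.
    Third: prompt the user to decide manually. d1/d2 are the mean
    factors in each interval and break ties between x1 and x2."""
    if x1 != x2:
        return "first" if x1 > x2 else "second"
    if d1 != d2:
        return "first" if d1 > d2 else "second"
    return "third"
```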
Based on the above, in addition to steps S21-S24, the method may further include the following.

If the identification result is the second identification result, the video information is output from its current frame based on the first information output thread, where the current frame comes after the video frame.

If the identification result is the third identification result, the prompt information is output when the second application program is switched out and the first application program is run. If a first signal fed back by the user based on the prompt information is received, the target frame corresponding to the video frame is determined in the second information output thread, and the video information synchronized into the second information output thread is output from the target frame based on that thread. If the first signal is not received, the video information is output from its current frame based on the first information output thread, where the current frame comes after the video frame.

It can be understood that, through the above content, three different recognition results can be determined from the multiple categories of feature information, and different operations are then executed when the user switches back to the first application program, which improves the flexibility and accuracy of resuming the live video.

On this basis, since the video information is synchronized between the first information output thread and the second information output thread, the intelligent terminal caches two copies of the video information while running the first application program, which occupies considerable storage space. To improve on this, after the step of synchronizing the video information into the second information output thread while it is output based on the first information output thread, the method may further include the following.

The corresponding information in the second information output thread is deleted according to the partial information already output by the first information output thread. For example, if the first information output thread resumes outputting live video at time T1, the intelligent terminal may delete from the second information output thread, in real time, the video corresponding to what the first information output thread has already output. In this way the storage-space usage of the intelligent terminal is effectively reduced.
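A minimal sketch of the trimming step, under the same buffer model as the earlier fragments (the count of frames already output by the first thread is an assumed bookkeeping value):

```python
def trim_backup(backup_buffer, frames_already_output):
    """Drop from the second thread's buffer the frames that the first
    information output thread has already output, freeing storage."""
    del backup_buffer[:frames_already_output]

buf = ["f0", "f1", "f2", "f3"]
trim_backup(buf, 2)
print(buf)   # -> ['f2', 'f3']
```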
On the basis of the above, please refer to fig. 2, a block diagram of a video information output apparatus 201 based on feature recognition according to an embodiment of the present invention. The video information output apparatus 201 based on feature recognition may include the following modules.

A synchronization module 2011, configured to, when the video information obtained from the server is imported into a first information output thread established through a first application and the video information is output based on the first information output thread, establish, through the first application, a second information output thread parallel to the first information output thread, and synchronize the video information into the second information output thread while the video information is output based on the first information output thread.

A switching module 2012, configured to detect in real time whether a first application switching instruction input by a user is received, and, when the first application switching instruction is detected, switch out of the first application to run the second application; and acquire the cut-out time of the first application and the video frame of the video information at the cut-out time.

An identification module 2013, configured to collect feature information of the user through the second application when the second application is running, and identify the feature information to obtain an identification result.

An output module 2014, configured to judge, based on the identification result, whether a pull-back signal for the video information is triggered; if the pull-back signal for the video information is triggered, then, when a second application switching instruction input by the user is detected, switch out of the second application and run the first application, determine in the second information output thread a target frame corresponding to the video frame, and continue to output, from the target frame and based on the second information output thread, the video information synchronized into the second information output thread.
In an alternative embodiment, the identification module 2013 is specifically configured to:

detect whether an acquisition device integrated in the intelligent terminal sends a detection signal; and when it is detected that the acquisition device has sent the detection signal, determine the category of the feature information collected by the acquisition device through the type identifier of the acquisition device;

allocate a first feature identifier to the feature information according to the category of the feature information, wherein the first feature identifier is used for distinguishing feature categories; and determine a target feature conversion list in a preset database according to the first feature identifier, wherein the preset database stores a plurality of feature conversion lists, each feature conversion list is provided with a second feature identifier, the second feature identifier of the target feature conversion list is consistent with the first feature identifier, and each feature conversion list stores correspondences between preset feature information and feature vectors;

find, in the target feature conversion list, first preset feature information corresponding to the feature information, and determine a target feature vector corresponding to the feature information based on the correspondence associated with the first preset feature information in the target feature conversion list;

input the target feature vector into a pre-built convolutional neural network, identify the target feature vector based on the convolutional neural network, and output an evaluation label corresponding to the target feature vector;

and determine the time weight of the evaluation label, and determine the identification result corresponding to the feature information based on the evaluation label and its time weight.
In an alternative embodiment, the identification module 2013 is specifically configured to:

when there are multiple categories of feature information, determine the evaluation factor of the evaluation label of each category of feature information, wherein the evaluation factor is a first evaluation factor when the evaluation label is a first evaluation label, a second evaluation factor when the evaluation label is a second evaluation label, and a third evaluation factor when the evaluation label is a third evaluation label;

weight the first evaluation factor of each category of feature information by the time weight of that category to obtain a second evaluation factor;

determine, among all the second evaluation factors, a first proportion of second evaluation factors falling in a first value interval and a second proportion falling in a second value interval, wherein the first value interval is bounded by a first endpoint and a second endpoint, the second value interval by the second endpoint and a third endpoint, the first endpoint is smaller than the second endpoint, the second endpoint is smaller than the third endpoint, and neither value interval includes the second endpoint;

and determine the identification result corresponding to the multiple categories of feature information according to the first proportion and the second proportion.
In an alternative embodiment, the identification module 2013 is specifically configured to:

when the first proportion is greater than the second proportion, determine the identification result corresponding to the multiple categories of feature information to be a first identification result, wherein the first identification result indicates that the pull-back signal is triggered;

when the first proportion is smaller than the second proportion, determine the identification result to be a second identification result, wherein the second identification result indicates that the pull-back signal is not triggered;

when the first proportion is equal to the second proportion, determine a first mean of the second evaluation factors corresponding to the first proportion and a second mean of the second evaluation factors corresponding to the second proportion; determine the identification result to be the first identification result when the first mean is greater than the second mean, the second identification result when the first mean is smaller than the second mean, and a third identification result when the two means are equal, wherein the third identification result instructs the intelligent terminal to output prompt information asking the user whether to trigger the pull-back signal manually when switching back to the first application program.
In an alternative embodiment, the output module 2014 is further configured to:
if the identification result is the second identification result, output the video information from its current frame based on the first information output thread, wherein the current frame comes after the video frame.
In an alternative embodiment, the output module 2014 is further configured to:
if the identification result is the third identification result, output the prompt information when the second application program is switched out and the first application program is run; if a first signal fed back by the user based on the prompt information is received, determine in the second information output thread a target frame corresponding to the video frame, and continue to output, from the target frame and based on the second information output thread, the video information synchronized into the second information output thread; and if the first signal is not received, output the video information from its current frame based on the first information output thread, wherein the current frame comes after the video frame.
In an alternative embodiment, the synchronization module 2011 is further configured to delete the corresponding information in the second information output thread according to the partial information already output by the first information output thread.
An embodiment of the present invention further provides a readable storage medium on which a program is stored; when the program is executed by a processor, it implements the above video information output method based on feature recognition.

An embodiment of the present invention further provides a processor configured to run a program; when the program runs, the above video information output method based on feature recognition is executed.

In this embodiment, as shown in fig. 3, the intelligent terminal 200 includes at least one processor 211, and at least one memory 212 and a bus 213 connected to the processor 211. The processor 211 and the memory 212 communicate with each other through the bus 213. The processor 211 is configured to call program instructions in the memory 212 to execute the above video information output method based on feature recognition.

In summary, the video information output method based on feature recognition and the intelligent terminal provided by the embodiments of the present invention can establish a first information output thread and a parallel second information output thread and synchronize the video information into both. When the user switches between the first application program and the second application program on the intelligent terminal, the collected feature information of the user is identified to judge whether a pull-back signal is triggered. If the pull-back signal is triggered when the user switches back to the first application program, the video information is output, based on the second information output thread, from the moment at which the user last switched away from the first application program. The user therefore does not need to pull back the progress bar manually, which on the one hand makes video playback more convenient and, on the other hand, does not affect the normal operation of the first information output thread.
The present application is described with reference to flowcharts and/or block diagrams of methods and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a smart terminal includes one or more processors (CPUs), memory, and a bus. The intelligent terminal may also include input/output interfaces, network interfaces, and the like.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip. The memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing smart terminal. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or intelligent terminal that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or intelligent terminal. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or intelligent terminal that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above are merely examples of the present application and are not intended to limit it. Various modifications and changes will occur to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present application shall fall within the scope of the claims of the present application.
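Before turning to the claims, the following minimal Python sketch illustrates the feature-evaluation pipeline recited in claim 2 below: preset feature information is mapped to a target feature vector through a feature conversion list selected by the feature identifier, and a network maps the vector to an evaluation label. The dictionary layout and the toy stand-in for the convolutional neural network are assumptions for illustration only.

```python
# Hedged sketch of the claim-2 pipeline; conversion-list contents and the
# toy scoring function standing in for the CNN are illustrative assumptions.
FEATURE_CONVERSION_LISTS = {
    "expression": {"smile": [1.0, 0.0], "frown": [0.0, 1.0]},  # preset info -> vector
    "voice":      {"laugh": [1.0, 0.2], "sigh":  [0.1, 1.0]},
}

def evaluation_label(category: str, preset_info: str, cnn) -> str:
    # The first feature identifier (here, the category) selects the target
    # feature conversion list whose second feature identifier matches it.
    target_list = FEATURE_CONVERSION_LISTS[category]
    target_vector = target_list[preset_info]   # preset feature info -> vector
    return cnn(target_vector)                  # network maps vector -> label

# Toy stand-in "network": labels by whichever vector component dominates.
toy_cnn = lambda v: "first" if v[0] > v[1] else "second"
print(evaluation_label("expression", "smile", toy_cnn))   # -> "first"
```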

Claims (8)

1. A video information output method based on feature recognition, applied to an intelligent terminal, wherein the intelligent terminal communicates with a server, a first application program and a second application program are installed in the intelligent terminal, the first application program is a live-streaming APP, and the second application program is chat software, the method comprising:
when video information acquired from the server is imported into a first information output thread established through the first application program and the video information is output based on the first information output thread, establishing, through the first application program, a second information output thread parallel to the first information output thread, and synchronizing the video information into the second information output thread while the video information is output based on the first information output thread;
detecting in real time whether a first application program switching instruction input by a user is received, and when the first application program switching instruction is detected, switching out of the first application program to run the second application program; acquiring a switch-out time of the first application program and a video frame corresponding to the video information at the switch-out time;
while the second application program is running, collecting feature information of the user through the second application program, and identifying the feature information to obtain an identification result;
and judging, based on the identification result, whether a pull-back signal for the video information is triggered; if the pull-back signal for the video information is triggered, when a second application program switching instruction input by the user is detected, switching out of the second application program and running the first application program, determining a target frame corresponding to the video frame from the second information output thread, and continuing to output, based on the second information output thread, the video information synchronized in the second information output thread from the target frame.
2. The method according to claim 1, wherein the identifying the feature information to obtain an identification result comprises:
detecting whether acquisition equipment integrated in the intelligent terminal sends a detection signal; when it is detected that the acquisition equipment sends the detection signal, determining the category of the feature information acquired by the acquisition equipment according to a type identifier of the acquisition equipment;
assigning a first feature identifier to the feature information according to the category of the feature information, wherein the first feature identifier is used for distinguishing feature categories; determining a target feature conversion list in a preset database according to the first feature identifier, wherein the preset database stores a plurality of feature conversion lists, each feature conversion list is provided with a second feature identifier, the second feature identifier of the target feature conversion list is consistent with the first feature identifier, and each feature conversion list stores correspondences between preset feature information and feature vectors;
searching the target feature conversion list for first preset feature information corresponding to the feature information, and determining a target feature vector corresponding to the feature information based on the correspondence stored in the target feature conversion list for the first preset feature information;
inputting the target feature vector into a pre-built convolutional neural network, identifying the target feature vector based on the convolutional neural network, and outputting an evaluation label corresponding to the target feature vector;
and determining a time weight of the evaluation label, and determining the identification result corresponding to the feature information based on the evaluation label and the time weight thereof.
3. The method according to claim 2, wherein the determining the identification result corresponding to the feature information based on the evaluation label and the time weight thereof comprises:
when the feature information is of multiple categories, determining the evaluation factor of the evaluation label of each category of feature information, wherein the evaluation factor is a first evaluation factor when the evaluation label is a first evaluation label, a second evaluation factor when the evaluation label is a second evaluation label, and a third evaluation factor when the evaluation label is a third evaluation label;
weighting the first evaluation factor of each category of feature information according to the time weight of that category of feature information to obtain a second evaluation factor;
determining, among all the second evaluation factors, a first proportion of second evaluation factors falling in a first numerical interval and a second proportion of second evaluation factors falling in a second numerical interval, wherein the first numerical interval is bounded by a first endpoint and a second endpoint, the second numerical interval is bounded by the second endpoint and a third endpoint, the first endpoint is smaller than the second endpoint, the second endpoint is smaller than the third endpoint, and neither the first numerical interval nor the second numerical interval includes the second endpoint;
and determining the identification result corresponding to the multiple categories of feature information according to the first proportion and the second proportion.
4. The method according to claim 3, wherein the determining, according to the first proportion and the second proportion, the identification result corresponding to the multiple categories of feature information comprises:
when the first proportion is larger than the second proportion, determining the identification result corresponding to the multiple categories of feature information to be a first identification result, wherein the first identification result indicates that the pull-back signal is triggered;
when the first proportion is smaller than the second proportion, determining the identification result corresponding to the multiple categories of feature information to be a second identification result, wherein the second identification result indicates that the pull-back signal is not triggered;
when the first proportion is equal to the second proportion, determining a first mean value of the second evaluation factors corresponding to the first proportion and a second mean value of the second evaluation factors corresponding to the second proportion; when the first mean value is larger than the second mean value, determining the identification result corresponding to the multiple categories of feature information to be the first identification result; when the first mean value is smaller than the second mean value, determining the identification result to be the second identification result; and when the first mean value is equal to the second mean value, determining the identification result to be a third identification result, wherein the third identification result instructs the intelligent terminal to output prompt information, and the prompt information prompts the user whether to manually trigger the pull-back signal when switching back to the first application program (a minimal code sketch of this decision rule follows the claims).
5. The method of claim 4, further comprising:
if the identification result is the second identification result, outputting the video information from a current frame based on the first information output thread, wherein the current frame is located after the video frame.
6. The method of claim 4, further comprising:
if the identification result is the third identification result, outputting the prompt information when the second application program is switched out and the first application program is run; if a first signal fed back by the user based on the prompt information is received, determining a target frame corresponding to the video frame from the second information output thread and continuing to output the video information synchronized in the second information output thread from the target frame based on the second information output thread; and if the first signal fed back by the user based on the prompt information is not received, outputting the video information from a current frame based on the first information output thread, wherein the current frame is located after the video frame.
7. The method according to any one of claims 1 to 6, wherein, after the step of synchronizing the video information into the second information output thread while outputting the video information based on the first information output thread, the method further comprises:
deleting, from the second information output thread, the information corresponding to the partial information already output by the first information output thread.
8. An intelligent terminal, comprising a processor, and a memory and a bus connected to the processor, wherein the processor and the memory communicate with each other through the bus, and the processor is configured to call a computer program in the memory to execute the video information output method based on feature recognition according to any one of claims 1 to 7.
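For readers tracing the decision rule in claims 3 and 4 above, the following is a minimal Python sketch of the weighted-factor logic. The interval endpoints e1 < e2 < e3, the result labels, and the input layout (one (first_evaluation_factor, time_weight) pair per feature category) are assumptions made for illustration; the claims leave them unspecified.

```python
# Hedged sketch of the claim 3-4 decision rule; endpoints and input format
# are assumptions, not values fixed by the claims. Assumes at least one
# feature category is supplied.
from statistics import mean

FIRST, SECOND, THIRD = "first", "second", "third"   # identification results

def identification_result(factors_and_weights, e1, e2, e3):
    # Claim 3: weight each category's first evaluation factor by its time
    # weight to obtain the second evaluation factors.
    weighted = [factor * weight for factor, weight in factors_and_weights]

    in_first = [v for v in weighted if e1 <= v < e2]    # [e1, e2), e2 excluded
    in_second = [v for v in weighted if e2 < v <= e3]   # (e2, e3], e2 excluded
    p1 = len(in_first) / len(weighted)                  # first proportion
    p2 = len(in_second) / len(weighted)                 # second proportion

    # Claim 4: compare proportions, then fall back to the interval means.
    if p1 > p2:
        return FIRST        # pull-back signal triggered
    if p1 < p2:
        return SECOND       # pull-back signal not triggered
    if not in_first:        # equal proportions of zero: nothing to average
        return THIRD
    m1, m2 = mean(in_first), mean(in_second)
    if m1 > m2:
        return FIRST
    if m1 < m2:
        return SECOND
    return THIRD            # defer to the user via the prompt information

# Example: three feature categories, unit interval split at 0.5.
print(identification_result([(0.8, 0.5), (0.3, 1.0), (0.9, 0.9)],
                            e1=0.0, e2=0.5, e3=1.0))   # -> "first"
```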
CN202011188315.5A 2020-02-26 2020-02-26 Video information output method based on feature recognition and intelligent terminal Withdrawn CN112203109A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011188315.5A CN112203109A (en) 2020-02-26 2020-02-26 Video information output method based on feature recognition and intelligent terminal

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010118114.1A CN111343474B (en) 2020-02-26 2020-02-26 Information output method and device based on feature recognition and intelligent terminal
CN202011188315.5A CN112203109A (en) 2020-02-26 2020-02-26 Video information output method based on feature recognition and intelligent terminal

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202010118114.1A Division CN111343474B (en) 2020-02-26 2020-02-26 Information output method and device based on feature recognition and intelligent terminal

Publications (1)

Publication Number Publication Date
CN112203109A true CN112203109A (en) 2021-01-08

Family

ID=71185593

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202011188315.5A Withdrawn CN112203109A (en) 2020-02-26 2020-02-26 Video information output method based on feature recognition and intelligent terminal
CN202011183652.5A Withdrawn CN112367529A (en) 2020-02-26 2020-02-26 Information output method based on feature recognition and intelligent terminal
CN202010118114.1A Active CN111343474B (en) 2020-02-26 2020-02-26 Information output method and device based on feature recognition and intelligent terminal

Family Applications After (2)

Application Number Title Priority Date Filing Date
CN202011183652.5A Withdrawn CN112367529A (en) 2020-02-26 2020-02-26 Information output method based on feature recognition and intelligent terminal
CN202010118114.1A Active CN111343474B (en) 2020-02-26 2020-02-26 Information output method and device based on feature recognition and intelligent terminal

Country Status (1)

Country Link
CN (3) CN112203109A (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113577761A (en) * 2021-07-02 2021-11-02 深圳迭代如风网络科技有限公司 High-precision synchronous prediction rollback method based on certainty

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5471576A (en) * 1992-11-16 1995-11-28 International Business Machines Corporation Audio/video synchronization for application programs
US7500128B2 (en) * 2005-05-11 2009-03-03 Intel Corporation Mobile systems with seamless transition by activating second subsystem to continue operation of application executed by first subsystem as it enters into sleep mode
CN105892996A (en) * 2015-12-14 2016-08-24 乐视网信息技术(北京)股份有限公司 Assembly line work method and apparatus for batch data processing
CN106406998A (en) * 2016-09-28 2017-02-15 北京奇虎科技有限公司 Method and device for processing user interface
CN108055408B (en) * 2017-12-28 2019-12-24 维沃移动通信有限公司 Application program control method and mobile terminal
CN108415753A (en) * 2018-03-12 2018-08-17 广东欧珀移动通信有限公司 Method for displaying user interface, device and terminal

Also Published As

Publication number Publication date
CN111343474A (en) 2020-06-26
CN111343474B (en) 2020-11-17
CN112367529A (en) 2021-02-12

Similar Documents

Publication Publication Date Title
US9612791B2 (en) Method, system and storage medium for monitoring audio streaming media
CN113163272B (en) Video editing method, computer device and storage medium
US20140043480A1 (en) Video monitoring system and method
CN105989144B (en) Notification message management method, device and system and terminal equipment
CN111836102B (en) Video frame analysis method and device
CN111669577A (en) Hardware decoding detection method and device, electronic equipment and storage medium
WO2020135756A1 (en) Video segment extraction method, apparatus and device, and computer-readable storage medium
CN111343474B (en) Information output method and device based on feature recognition and intelligent terminal
US8712100B2 (en) Profiling activity through video surveillance
AU2018432003B2 (en) Video processing method and device, and terminal and storage medium
CN108170585A (en) log processing method, device, terminal device and storage medium
CN103986845A (en) Information processing method and information processing device
CN108572746B (en) Method, apparatus and computer readable storage medium for locating mobile device
CN107666398B (en) Group notification method, system and storage medium based on user behavior
KR20210040330A (en) Video clip extraction method and device
US10803861B2 (en) Method and apparatus for identifying information
CN116108150A (en) Intelligent question-answering method, device, system and electronic equipment
CN116055762A (en) Video synthesis method and device, electronic equipment and storage medium
CN113114986B (en) Early warning method based on picture and sound synchronization and related equipment
CN113099283B (en) Method for synchronizing monitoring picture and sound and related equipment
CN113438286B (en) Information pushing method and device, electronic equipment and storage medium
CN113205079B (en) Face detection method and device, electronic equipment and storage medium
CN113936655A (en) Voice broadcast processing method and device, computer equipment and storage medium
CN109523990B (en) Voice detection method and device
CN111818389B (en) Data processing method and device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210108