WO2023245846A1 - Mobile terminal real content analysis, built-in anti-fraud and fraud determination system and method - Google Patents

Mobile terminal real content analysis, built-in anti-fraud and fraud determination system and method (移动终端真实内容解析、内置反诈和欺诈判断系统及方法)

Info

Publication number
WO2023245846A1
WO2023245846A1 (PCT application PCT/CN2022/113166; CN2022113166W)
Authority
WO
WIPO (PCT)
Prior art keywords
neural network
recurrent neural
mobile terminal
video
total number
Prior art date
Application number
PCT/CN2022/113166
Other languages
English (en)
French (fr)
Inventor
喻荣先
Original Assignee
喻荣先
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202210702348.XA external-priority patent/CN114900712A/zh
Priority claimed from CN202210736338.8A external-priority patent/CN115022880A/zh
Application filed by 喻荣先 filed Critical 喻荣先
Publication of WO2023245846A1 publication Critical patent/WO2023245846A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs

Definitions

  • the present invention relates to the field of mobile terminals, and in particular, to a mobile terminal real content analysis system.
  • mobile terminals have moved from a "device-centered" model to a "people-centered" model, integrating embedded computing, control technology, artificial intelligence and biometric authentication, and fully embodying a people-centered purpose. Owing to advances in software technology, mobile terminals can adjust their settings to personal needs and have become more personalized. At the same time, a mobile terminal integrates a great deal of software and hardware, and its functions are becoming ever more powerful.
  • in a mobile terminal running an anchor live-streaming APP, the received video clips are usually fake, special-effects videos to which beautification effects have been added by the various beauty modes provided by the APP; although this can make the video content more entertaining and attractive, it also deceives, to a certain extent, the users who watch these clips, so at a minimum some options for removing the beautification effect should be offered to the user whenever the current video clip is identified as using a beauty mode.
  • the present invention provides a mobile terminal real content analysis system, directed at the technical problem that, in the prior art, it is difficult to determine on the user's mobile terminal whether beautification effects have been added to fake, special-effects videos;
  • a customized video-frame selection mechanism and a multi-frame joint identification mode are used to perform effective, neural-network-based identification of beauty modes, thereby providing objective data and more options for the user of the mobile terminal.
  • a mobile terminal real content analysis system includes:
  • a video caching mechanism, arranged in the mobile terminal running the anchor live-streaming APP, for receiving a video clip sent from the live-video server and caching the video clip as the current video clip;
  • a content selection mechanism, connected to the video caching mechanism, for performing deduplication on the individual video frames that make up the current video clip to obtain a plurality of remaining frames;
  • a time-sharing processing mechanism, connected to the content selection mechanism, for selecting, according to the timestamps respectively corresponding to the plurality of remaining frames, a set total number of evenly time-spaced remaining frames as the set total number of input contents;
  • a pattern identification device, arranged in the mobile terminal and connected to the time-sharing processing mechanism, for feeding the set total number of input contents to the input end of a recurrent neural network and running the recurrent neural network to obtain the beauty identification result output by the recurrent neural network;
  • a network generation device, connected to the pattern identification device, for generating the recurrent neural network required by the pattern identification device;
  • generating the recurrent neural network required by the pattern identification device includes: sending the recurrent neural network, after each learning action has been performed, to the pattern identification device for its use;
  • sending the recurrent neural network after each learning action to the pattern identification device for its use includes: the number of learning actions performed on the recurrent neural network is positively correlated with the mean algorithm complexity of the various beauty modes present in the anchor live-streaming APP;
  • selecting a set total number of evenly time-spaced remaining frames as the set total number of input contents includes: the value of the set total number is proportional to the total number of beauty modes present in the anchor live-streaming APP.
  • the structure of the recurrent neural network established for each anchor live-streaming APP is different: specifically, the number of learning actions performed on the recurrent neural network is positively correlated with the mean algorithm complexity of the various beauty modes in that APP, and the number of inputs of the recurrent neural network is proportional to the total number of beauty modes present in that APP; third, the anchor's current video clip is deduplicated and evenly sampled in time to obtain the multi-frame images used as input to the recurrent neural network, thereby improving the objectivity of the reference data used for identification.
  • Figure 1 is a structural block diagram of a mobile terminal real content analysis system according to the first embodiment of the present invention.
  • Figure 2 is a structural block diagram of a mobile terminal real content parsing system according to the second embodiment of the present invention.
  • Figure 3 is a structural block diagram of a mobile terminal real content parsing system according to the third embodiment of the present invention.
  • pre-installed software on a mobile terminal generally refers to applications that ship with the terminal from the factory, or that are pre-installed in the consumer's terminal through third-party flashing channels, and that the consumer cannot delete;
  • in addition, there are third-party mobile APPs that users download and install themselves from the mobile terminal application market;
  • the downloads are mainly social-community software.
  • in a mobile terminal running an anchor live-streaming APP, the received video clips are usually fake, special-effects videos to which beautification effects have been added by the various beauty modes provided by the APP; although this can make the video content more entertaining and attractive, it also deceives, to a certain extent, the users who watch these clips, so at a minimum some options for removing the beautification effect should be offered to the user whenever the current video clip is identified as using a beauty mode.
  • the present invention builds a mobile terminal real content analysis system, which can effectively solve the corresponding technical problems.
  • the present invention has at least the following three significant technical advances:
  • the structure of the recurrent neural network established for each anchor live-streaming APP is different: specifically, the number of learning actions performed on the recurrent neural network is positively correlated with the mean algorithm complexity of the various beauty modes in that APP, and the number of inputs of the recurrent neural network is proportional to the total number of beauty modes present in that APP; third, the anchor's current video clip is deduplicated and evenly sampled in time to obtain the multi-frame images used as input to the recurrent neural network, thereby improving the objectivity of the reference data used for identification.
  • Figure 1 is a structural block diagram of a mobile terminal real content parsing system according to the first embodiment of the present invention.
  • the system includes:
  • a video caching mechanism, arranged in the mobile terminal running the anchor live-streaming APP, for receiving a video clip sent from the live-video server and caching the video clip as the current video clip;
  • a content selection mechanism, connected to the video caching mechanism, for performing deduplication on the individual video frames that make up the current video clip to obtain a plurality of remaining frames;
  • a time-sharing processing mechanism, connected to the content selection mechanism, for selecting, according to the timestamps respectively corresponding to the plurality of remaining frames, a set total number of evenly time-spaced remaining frames as the set total number of input contents;
  • a pattern identification device, arranged in the mobile terminal and connected to the time-sharing processing mechanism, for feeding the set total number of input contents to the input end of a recurrent neural network and running the recurrent neural network to obtain the beauty identification result output by the recurrent neural network;
  • a network generation device, connected to the pattern identification device, for generating the recurrent neural network required by the pattern identification device;
  • generating the recurrent neural network required by the pattern identification device includes: sending the recurrent neural network, after each learning action has been performed, to the pattern identification device for its use;
  • sending the recurrent neural network after each learning action to the pattern identification device for its use includes: the number of learning actions performed on the recurrent neural network is positively correlated with the mean algorithm complexity of the various beauty modes present in the anchor live-streaming APP;
  • the algorithm complexity of a beauty mode can also be determined from the amount of computation it performs: the more computation a given beauty mode performs, the higher its algorithm complexity is determined to be;
  • selecting a set total number of evenly time-spaced remaining frames as the set total number of input contents includes: the value of the set total number is proportional to the total number of beauty modes present in the anchor live-streaming APP.
  • Figure 2 is a structural block diagram of a mobile terminal real content parsing system according to the second embodiment of the present invention.
  • the mobile terminal real content analysis system in Figure 2 may also include:
  • an auxiliary display device, arranged in the mobile terminal and connected to the pattern identification device and the video caching mechanism respectively, for displaying the beauty identification result output by the pattern identification device while the anchor live-streaming APP displays the current video clip in real time on the display screen of the mobile terminal.
  • Figure 3 is a structural block diagram of a mobile terminal real content parsing system according to the third embodiment of the present invention.
  • the mobile terminal real content analysis system in Figure 3 may also include:
  • a timing control device, arranged in the mobile terminal and connected to the auxiliary display device, the display screen and the video caching mechanism respectively, for synchronizing the actions of the auxiliary display device, the display screen and the video caching mechanism;
  • synchronizing the actions of the auxiliary display device and the video caching mechanism includes: the display actions of the auxiliary display device and of the display screen occur after the caching action of the video caching mechanism.
  • the positive correlation between the number of learning actions performed on the recurrent neural network and the mean algorithm complexity of the various beauty modes present in the anchor live-streaming APP includes: the smaller the value of the mean algorithm complexity of the various beauty modes present in the anchor live-streaming APP, the fewer learning actions are performed on the recurrent neural network;
  • the value of the set total number being proportional to the total number of beauty modes present in the anchor live-streaming APP includes: the fewer beauty modes exist in the anchor live-streaming APP, the smaller the determined value of the set total number.
  • sending the recurrent neural network after each learning action to the pattern identification device for its use includes: each learning action is performed using a video clip whose beauty identification result is already known, in which the set total number of input contents corresponding to that clip are fed to the input end of the recurrent neural network, and the beauty identification result corresponding to that clip is used as the output content of the output end of the recurrent neural network;
  • feeding the set total number of input contents to the input end of the recurrent neural network and running the recurrent neural network to obtain the beauty identification result output by the recurrent neural network includes: when the recurrent neural network outputs the encoded value corresponding to FALSE, it is determined that the anchor in the current video clip uses a beauty mode;
  • feeding the set total number of input contents to the input end of the recurrent neural network and running the recurrent neural network to obtain the beauty identification result output by the recurrent neural network includes: when the recurrent neural network outputs the encoded value corresponding to TRUE, it is determined that the anchor in the current video clip uses the bare-face mode.
  • selecting, according to the timestamps respectively corresponding to the plurality of remaining frames, a set total number of evenly time-spaced remaining frames as the set total number of input contents includes: the timestamps respectively corresponding to the selected remaining frames are pairwise equally spaced in time.
  • Receiving the video clip sent from the live video server and caching the video clip as the current video clip includes: the live video server is a big data application network element;
  • receiving the video clip sent from the live video server and caching the video clip as the current video clip includes: caching the video clip in a first-in, first-out storage mode.
  • each learning action is performed using a video clip whose beauty identification result is already known, in which the set total number of input contents corresponding to that clip are fed to the input end of the recurrent neural network and the beauty identification result corresponding to that clip is used as the output content of the output end of the recurrent neural network;
  • the set total number of input contents corresponding to that clip are the input contents obtained by passing the clip successively through the video caching mechanism, the content selection mechanism and the time-sharing processing mechanism.
  • with a customized video-frame selection mechanism and a multi-frame joint identification mode, effective neural-network-based identification of the beauty mode is achieved, thereby effectively preserving the authenticity of the anchor's picture for the user of the mobile terminal; a minimal end-to-end sketch of this pipeline follows this list.
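To make the flow described in the bullets above easier to follow, here is a minimal, illustrative Python sketch of the processing chain as this summary presents it: deduplicate the cached clip's frames, select a set total number of evenly spaced frames, and run a recurrent classifier over them. All names (`Frame`, `deduplicate_frames`, `select_evenly_spaced`, `run_beauty_classifier`) are assumptions introduced for illustration; the patent itself does not specify an implementation.

```python
# Illustrative pipeline skeleton; all names are assumptions, not from the patent text.
from typing import Callable, List, NamedTuple


class Frame(NamedTuple):
    timestamp: float  # seconds within the clip
    pixels: bytes     # raw frame data in some agreed format


def analyze_current_clip(
    frames: List[Frame],
    set_total: int,
    deduplicate_frames: Callable[[List[Frame]], List[Frame]],
    select_evenly_spaced: Callable[[List[Frame], int], List[Frame]],
    run_beauty_classifier: Callable[[List[Frame]], bool],
) -> str:
    """Return 'beauty mode' or 'bare-face mode' for the cached clip."""
    remaining = deduplicate_frames(frames)                # content selection mechanism
    inputs = select_evenly_spaced(remaining, set_total)   # time-sharing processing mechanism
    is_bare_face = run_beauty_classifier(inputs)          # pattern identification device (RNN)
    return "bare-face mode" if is_bare_face else "beauty mode"
```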

Abstract

The present invention relates to a mobile terminal real content analysis system, comprising: a video caching mechanism, arranged in a mobile terminal running an anchor live-streaming APP, for receiving a video clip sent from a live-video server and caching the video clip as the current video clip; a content selection mechanism, for performing deduplication on the individual video frames that make up the current video clip to obtain a plurality of remaining frames; and a pattern identification device, for feeding a set total number of input contents to the input end of a recurrent neural network and running the recurrent neural network to obtain the beauty identification result output by the recurrent neural network. With the present invention, a customized video-frame selection mechanism and a multi-frame joint identification mode can be used to achieve effective, neural-network-based identification of beauty modes, thereby effectively preserving the authenticity of the anchor's picture for the user of the mobile terminal.

Description

Mobile terminal real content analysis, built-in anti-fraud and fraud determination system and method
Technical Field
The present invention relates to the field of mobile terminals, and in particular to a mobile terminal real content analysis system.
Background Art
With the development of computer technology, mobile terminals have moved from a "device-centered" model to a "people-centered" model, integrating embedded computing, control technology, artificial intelligence and biometric authentication, and fully embodying a people-centered purpose. Owing to advances in software technology, mobile terminals can adjust their settings to personal needs and have become more personalized. At the same time, a mobile terminal integrates a great deal of software and hardware, and its functions are becoming ever more powerful.
In the prior art, in a mobile terminal running an anchor live-streaming APP, the received video clips are usually fake, special-effects videos to which beautification effects have been added by the various beauty modes provided by the APP. Although this can make the video content more entertaining and attractive, it also deceives, to a certain extent, the users who watch these clips; at a minimum, some options for removing the beautification effect should be offered to the user whenever the current video clip is identified as using a beauty mode.
Summary of the Invention
To solve the above problem, the present invention provides a mobile terminal real content analysis system. Directed at the technical problem that, in the prior art, it is difficult to determine on the user's mobile terminal whether beautification effects have been added to fake, special-effects videos, a customized video-frame selection mechanism and a multi-frame joint identification mode are used to perform effective, neural-network-based identification of beauty modes, thereby providing objective data and more options for the user of the mobile terminal.
According to one aspect of the present invention, a mobile terminal real content analysis system is provided, the system comprising:
a video caching mechanism, arranged in the mobile terminal running the anchor live-streaming APP, for receiving a video clip sent from the live-video server and caching the video clip as the current video clip;
a content selection mechanism, connected to the video caching mechanism, for performing deduplication on the individual video frames that make up the current video clip to obtain a plurality of remaining frames;
a time-sharing processing mechanism, connected to the content selection mechanism, for selecting, according to the timestamps respectively corresponding to the plurality of remaining frames, a set total number of evenly time-spaced remaining frames as the set total number of input contents;
a pattern identification device, arranged in the mobile terminal and connected to the time-sharing processing mechanism, for feeding the set total number of input contents to the input end of a recurrent neural network and running the recurrent neural network to obtain the beauty identification result output by the recurrent neural network;
a network generation device, connected to the pattern identification device, for generating the recurrent neural network required by the pattern identification device;
wherein generating the recurrent neural network required by the pattern identification device comprises: sending the recurrent neural network, after each learning action has been performed, to the pattern identification device for use by the pattern identification device;
wherein sending the recurrent neural network after each learning action to the pattern identification device for its use comprises: the number of learning actions performed on the recurrent neural network is positively correlated with the mean algorithm complexity of the various beauty modes present in the anchor live-streaming APP;
wherein selecting, according to the timestamps respectively corresponding to the plurality of remaining frames, a set total number of evenly time-spaced remaining frames as the set total number of input contents comprises: the value of the set total number is proportional to the total number of beauty modes present in the anchor live-streaming APP.
It can thus be seen that the present invention offers at least the following three significant technical advances:
First, on the mobile terminal running the anchor live-streaming APP, recurrent-neural-network-based joint identification over multiple frames is performed on the anchor's current video clip to determine whether a beauty mode is used, thereby providing truthful information to the user of the mobile terminal. Second, the structure of the recurrent neural network established for each anchor live-streaming APP is different: specifically, the number of learning actions performed on the recurrent neural network is positively correlated with the mean algorithm complexity of the various beauty modes in that APP, and the number of inputs of the recurrent neural network is proportional to the total number of beauty modes present in that APP. Third, the anchor's current video clip is deduplicated and evenly sampled in time to obtain the multi-frame images used as input to the recurrent neural network, thereby improving the objectivity of the reference data used for identification.
Brief Description of the Drawings
Embodiments of the present invention are described below with reference to the accompanying drawings, in which:
Figure 1 is a structural block diagram of a mobile terminal real content analysis system according to the first embodiment of the present invention.
Figure 2 is a structural block diagram of a mobile terminal real content analysis system according to the second embodiment of the present invention.
Figure 3 is a structural block diagram of a mobile terminal real content analysis system according to the third embodiment of the present invention.
Detailed Description of the Embodiments
Embodiments of the mobile terminal real content analysis system of the present invention are described in detail below with reference to the accompanying drawings.
Depending on the installation source, mobile APPs can be divided into software pre-installed on the mobile terminal and third-party application software installed by the user. Pre-installed software generally refers to applications that ship with the terminal from the factory, or that are pre-installed in the consumer's terminal through third-party flashing channels, and that the consumer cannot delete; in addition, there are third-party APPs that users download and install themselves from the application market, the downloads being mainly social-community software. In the prior art, in a mobile terminal running an anchor live-streaming APP, the received video clips are usually fake, special-effects videos to which beautification effects have been added by the various beauty modes provided by the APP. Although this can make the video content more entertaining and attractive, it also deceives, to a certain extent, the users who watch these clips; at a minimum, some options for removing the beautification effect should be offered to the user whenever the current video clip is identified as using a beauty mode.
To overcome the above shortcomings, the present invention constructs a mobile terminal real content analysis system that can effectively solve the corresponding technical problems.
The present invention offers at least the following three significant technical advances:
First, on the mobile terminal running the anchor live-streaming APP, recurrent-neural-network-based joint identification over multiple frames is performed on the anchor's current video clip to determine whether a beauty mode is used, thereby providing truthful information to the user of the mobile terminal. Second, the structure of the recurrent neural network established for each anchor live-streaming APP is different: specifically, the number of learning actions performed on the recurrent neural network is positively correlated with the mean algorithm complexity of the various beauty modes in that APP, and the number of inputs of the recurrent neural network is proportional to the total number of beauty modes present in that APP. Third, the anchor's current video clip is deduplicated and evenly sampled in time to obtain the multi-frame images used as input to the recurrent neural network, thereby improving the objectivity of the reference data used for identification.
Figure 1 is a structural block diagram of a mobile terminal real content analysis system according to the first embodiment of the present invention; the system comprises:
a video caching mechanism, arranged in the mobile terminal running the anchor live-streaming APP, for receiving a video clip sent from the live-video server and caching the video clip as the current video clip;
a content selection mechanism, connected to the video caching mechanism, for performing deduplication on the individual video frames that make up the current video clip to obtain a plurality of remaining frames;
a time-sharing processing mechanism, connected to the content selection mechanism, for selecting, according to the timestamps respectively corresponding to the plurality of remaining frames, a set total number of evenly time-spaced remaining frames as the set total number of input contents;
a pattern identification device, arranged in the mobile terminal and connected to the time-sharing processing mechanism, for feeding the set total number of input contents to the input end of a recurrent neural network and running the recurrent neural network to obtain the beauty identification result output by the recurrent neural network;
a network generation device, connected to the pattern identification device, for generating the recurrent neural network required by the pattern identification device;
wherein generating the recurrent neural network required by the pattern identification device comprises: sending the recurrent neural network, after each learning action has been performed, to the pattern identification device for use by the pattern identification device;
wherein sending the recurrent neural network after each learning action to the pattern identification device for its use comprises: the number of learning actions performed on the recurrent neural network is positively correlated with the mean algorithm complexity of the various beauty modes present in the anchor live-streaming APP;
for example, the more types of beauty effects a given beauty mode integrates, the higher its algorithm complexity;
in addition, the algorithm complexity of a beauty mode can also be determined from the amount of computation it performs: the more computation a given beauty mode performs, the higher its algorithm complexity is determined to be;
wherein selecting, according to the timestamps respectively corresponding to the plurality of remaining frames, a set total number of evenly time-spaced remaining frames as the set total number of input contents comprises: the value of the set total number is proportional to the total number of beauty modes present in the anchor live-streaming APP.
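As one possible reading of the content selection mechanism and the time-sharing processing mechanism described above, the sketch below removes frames whose pixel content exactly repeats an earlier frame and then picks the set total number of frames whose timestamps are as evenly spaced as possible. The exact-hash deduplication and the nearest-timestamp selection are assumptions for illustration; the text only requires deduplication and evenly time-spaced selection.

```python
# Illustrative sketch; hashing choice and index arithmetic are assumptions.
import hashlib
from typing import List, NamedTuple


class Frame(NamedTuple):
    timestamp: float
    pixels: bytes


def deduplicate_frames(frames: List[Frame]) -> List[Frame]:
    """Drop frames whose pixel content repeats an earlier frame (exact-hash dedup)."""
    seen, remaining = set(), []
    for frame in frames:
        digest = hashlib.sha1(frame.pixels).hexdigest()
        if digest not in seen:
            seen.add(digest)
            remaining.append(frame)
    return remaining


def select_evenly_spaced(remaining: List[Frame], set_total: int) -> List[Frame]:
    """Pick `set_total` frames whose timestamps are (approximately) equally spaced."""
    remaining = sorted(remaining, key=lambda f: f.timestamp)
    if set_total >= len(remaining):
        return remaining
    t0, t1 = remaining[0].timestamp, remaining[-1].timestamp
    step = (t1 - t0) / (set_total - 1) if set_total > 1 else 0.0
    picked = []
    for k in range(set_total):
        target = t0 + k * step
        # choose the remaining frame whose timestamp is closest to the target instant
        picked.append(min(remaining, key=lambda f: abs(f.timestamp - target)))
    return picked
```

A perceptual hash could replace the exact hash if near-duplicate frames should also be dropped; the description leaves this choice open.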
Next, the specific structure of the mobile terminal real content analysis system of the present invention is further described.
Figure 2 is a structural block diagram of a mobile terminal real content analysis system according to the second embodiment of the present invention.
Compared with the first embodiment of the present invention, the mobile terminal real content analysis system in Figure 2 may further comprise:
an auxiliary display device, arranged in the mobile terminal and connected to the pattern identification device and the video caching mechanism respectively, for displaying the beauty identification result output by the pattern identification device while the anchor live-streaming APP displays the current video clip in real time on the display screen of the mobile terminal.
Figure 3 is a structural block diagram of a mobile terminal real content analysis system according to the third embodiment of the present invention.
Compared with the second embodiment of the present invention, the mobile terminal real content analysis system in Figure 3 may further comprise:
a timing control device, arranged in the mobile terminal and connected to the auxiliary display device, the display screen and the video caching mechanism respectively, for synchronizing the actions of the auxiliary display device, the display screen and the video caching mechanism;
wherein synchronizing the actions of the auxiliary display device and the video caching mechanism comprises: the display actions of the auxiliary display device and of the display screen occur after the caching action of the video caching mechanism.
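The third embodiment constrains only the ordering: the auxiliary display device and the display screen act after the video caching mechanism has cached the clip. One minimal way to express that constraint is an event that the caching step sets and the display step waits on, as in the assumed sketch below; the patent's timing control device is not further specified.

```python
# Illustrative sketch; the threading-based structure is an assumption.
import threading
from typing import Optional

clip_cached = threading.Event()          # set by the video caching mechanism
current_clip: Optional[bytes] = None


def cache_clip(clip: bytes) -> None:
    """Video caching mechanism: store the clip, then allow display to proceed."""
    global current_clip
    current_clip = clip
    clip_cached.set()


def display_with_result(result: str) -> None:
    """Auxiliary display + screen: blocked until the caching action has happened."""
    clip_cached.wait()                   # display action occurs after the caching action
    print(f"showing clip ({len(current_clip or b'')} bytes) with result: {result}")
```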
In the mobile terminal real content analysis system according to any embodiment of the present invention:
the positive correlation between the number of learning actions performed on the recurrent neural network and the mean algorithm complexity of the various beauty modes present in the anchor live-streaming APP comprises: the smaller the value of the mean algorithm complexity of the various beauty modes present in the anchor live-streaming APP, the fewer learning actions are performed on the recurrent neural network;
wherein the value of the set total number being proportional to the total number of beauty modes present in the anchor live-streaming APP comprises: the fewer beauty modes exist in the anchor live-streaming APP, the smaller the determined value of the set total number.
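The two rules above are stated only as monotone relationships (fewer beauty modes, smaller set total number; lower mean algorithm complexity, fewer learning actions); no formula is given. The linear scalings below are therefore assumptions that merely illustrate one way such relationships could be realized.

```python
# Illustrative sketch; the scaling constants are assumptions, not from the patent.
def choose_set_total(num_beauty_modes: int, frames_per_mode: int = 4) -> int:
    """Set total number of input frames, proportional to the number of beauty modes
    offered by the live-streaming APP (scaling factor is an assumed constant)."""
    return max(1, num_beauty_modes * frames_per_mode)


def choose_learning_actions(mean_complexity: float, actions_per_unit: int = 100) -> int:
    """Number of learning (training) actions, positively correlated with the mean
    algorithm complexity of the APP's beauty modes (again an assumed linear scale)."""
    return max(1, round(mean_complexity * actions_per_unit))
```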
In the mobile terminal real content analysis system according to any embodiment of the present invention:
sending the recurrent neural network after each learning action to the pattern identification device for its use comprises: each learning action is performed using a video clip whose beauty identification result is already known, in which the set total number of input contents corresponding to that clip are fed to the input end of the recurrent neural network, and the beauty identification result corresponding to that clip is used as the output content of the output end of the recurrent neural network;
wherein feeding the set total number of input contents to the input end of the recurrent neural network and running the recurrent neural network to obtain the beauty identification result output by the recurrent neural network comprises: when the recurrent neural network outputs the encoded value corresponding to FALSE, it is determined that the anchor in the current video clip uses a beauty mode;
wherein feeding the set total number of input contents to the input end of the recurrent neural network and running the recurrent neural network to obtain the beauty identification result output by the recurrent neural network comprises: when the recurrent neural network outputs the encoded value corresponding to TRUE, it is determined that the anchor in the current video clip uses the bare-face mode.
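The description states only that the recurrent neural network receives the set total number of inputs and emits an encoded value, with FALSE meaning a beauty mode was used and TRUE meaning the bare-face mode. The GRU-based classifier below is one illustrative realization under assumed choices (per-frame feature vectors, a single sigmoid output thresholded at 0.5); none of those specifics come from the description.

```python
# Illustrative sketch; architecture and threshold are assumptions.
import torch
from torch import nn


class BeautyModeRNN(nn.Module):
    """Recurrent classifier over a sequence of per-frame feature vectors."""

    def __init__(self, frame_features: int, hidden: int = 128):
        super().__init__()
        self.gru = nn.GRU(frame_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # single logit: probability of bare-face

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, set_total, frame_features)
        _, h_n = self.gru(frames)
        return self.head(h_n[-1])         # (batch, 1) logit


def decode_result(logit: torch.Tensor) -> str:
    """Map the network's encoded output to the TRUE/FALSE convention above."""
    is_true = torch.sigmoid(logit).item() >= 0.5
    return "bare-face mode (TRUE)" if is_true else "beauty mode (FALSE)"
```

For a single clip whose frames have been turned into a feature tensor `features` of shape `(set_total, frame_features)`, `decode_result(model(features.unsqueeze(0)))` would yield the identification result to be shown to the user.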
In the mobile terminal real content analysis system according to any embodiment of the present invention:
selecting, according to the timestamps respectively corresponding to the plurality of remaining frames, a set total number of evenly time-spaced remaining frames as the set total number of input contents comprises: the timestamps respectively corresponding to the selected remaining frames are pairwise equally spaced in time.
In the mobile terminal real content analysis system according to any embodiment of the present invention:
receiving the video clip sent from the live-video server and caching the video clip as the current video clip comprises: the live-video server is a big-data application network element;
wherein receiving the video clip sent from the live-video server and caching the video clip as the current video clip comprises: the video clip is cached in a first-in-first-out storage mode.
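A first-in-first-out clip cache, as stated above, can be expressed with a bounded deque: newly received clips are appended and, once capacity is reached, the oldest clip is discarded automatically. The capacity value and class name below are assumptions; the description only names the FIFO storage mode.

```python
# Illustrative sketch; capacity and naming are assumptions.
from collections import deque


class VideoClipCache:
    """Video caching mechanism: keeps the most recent clips in FIFO order."""

    def __init__(self, capacity: int = 8):
        self._clips = deque(maxlen=capacity)   # oldest clip is evicted first

    def receive(self, clip: bytes) -> None:
        """Called when a clip arrives from the live-video server."""
        self._clips.append(clip)

    @property
    def current_clip(self) -> bytes:
        """The most recently cached clip is treated as the current video clip."""
        return self._clips[-1]
```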
In addition, in the mobile terminal real content analysis system, each learning action is performed using a video clip whose beauty identification result is already known, in which the set total number of input contents corresponding to that clip are fed to the input end of the recurrent neural network and the beauty identification result corresponding to that clip is used as the output content of the output end of the recurrent neural network; this comprises: the set total number of input contents corresponding to that clip are the corresponding set total number of input contents obtained by passing the clip successively through the video caching mechanism, the content selection mechanism and the time-sharing processing mechanism.
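For the learning actions described above, where each action uses a clip whose beauty identification result is already known, with the clip's processed frames as the network input and the known result as the target output, a conventional supervised training step is one plausible realization. The loss function, optimizer, and the `preprocess_clip` helper (standing in for the caching, content selection, and time-sharing chain) are assumptions for illustration.

```python
# Illustrative sketch; loss, optimizer, and helper names are assumptions.
import torch
from torch import nn


def perform_learning_actions(model, labeled_clips, preprocess_clip, num_actions, lr=1e-3):
    """One learning action = one gradient step on a clip with a known result.

    labeled_clips: list of (clip, is_bare_face) pairs, is_bare_face in {0.0, 1.0}
    preprocess_clip: clip -> tensor of shape (set_total, frame_features), i.e. the
        output of the caching / content selection / time-sharing chain.
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for step in range(num_actions):
        clip, is_bare_face = labeled_clips[step % len(labeled_clips)]
        inputs = preprocess_clip(clip).unsqueeze(0)   # (1, set_total, frame_features)
        target = torch.tensor([[is_bare_face]])       # TRUE = 1.0 -> bare-face mode
        loss = loss_fn(model(inputs), target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return model  # handed to the pattern identification device after the learning actions
```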
With the mobile terminal real content analysis system of the present invention, directed at the technical problem in the prior art that mobile terminal users are often deceived by fake, beautified video images, a customized video-frame selection mechanism and a multi-frame joint identification mode are used to achieve effective, neural-network-based identification of beauty modes, thereby effectively preserving the authenticity of the anchor's picture for the user of the mobile terminal.
As described herein, those of ordinary skill in the art will recognize that various changes and modifications can be made to the present invention without departing from its broader scope.

Claims (10)

  1. A mobile terminal real content analysis system, characterized in that the system comprises:
    a video caching mechanism, arranged in a mobile terminal running an anchor live-streaming APP, for receiving a video clip sent from a live-video server and caching the video clip as the current video clip;
    a content selection mechanism, connected to the video caching mechanism, for performing deduplication on the individual video frames that make up the current video clip to obtain a plurality of remaining frames;
    a time-sharing processing mechanism, connected to the content selection mechanism, for selecting, according to the timestamps respectively corresponding to the plurality of remaining frames, a set total number of evenly time-spaced remaining frames as the set total number of input contents;
    a pattern identification device, arranged in the mobile terminal and connected to the time-sharing processing mechanism, for feeding the set total number of input contents to the input end of a recurrent neural network and running the recurrent neural network to obtain the beauty identification result output by the recurrent neural network;
    a network generation device, connected to the pattern identification device, for generating the recurrent neural network required by the pattern identification device;
    wherein generating the recurrent neural network required by the pattern identification device comprises: sending the recurrent neural network, after each learning action has been performed, to the pattern identification device for use by the pattern identification device;
    wherein sending the recurrent neural network after each learning action to the pattern identification device for its use comprises: the number of learning actions performed on the recurrent neural network is positively correlated with the mean algorithm complexity of the various beauty modes present in the anchor live-streaming APP;
    wherein selecting, according to the timestamps respectively corresponding to the plurality of remaining frames, a set total number of evenly time-spaced remaining frames as the set total number of input contents comprises: the value of the set total number is proportional to the total number of beauty modes present in the anchor live-streaming APP.
  2. The mobile terminal real content analysis system according to claim 1, characterized in that the system further comprises:
    an auxiliary display device, arranged in the mobile terminal and connected to the pattern identification device and the video caching mechanism respectively, for displaying the beauty identification result output by the pattern identification device while the anchor live-streaming APP displays the current video clip in real time on the display screen of the mobile terminal.
  3. The mobile terminal real content analysis system according to claim 2, characterized in that the system further comprises:
    a timing control device, arranged in the mobile terminal and connected to the auxiliary display device, the display screen and the video caching mechanism respectively, for synchronizing the actions of the auxiliary display device, the display screen and the video caching mechanism.
  4. The mobile terminal real content analysis system according to claim 3, characterized in that:
    synchronizing the actions of the auxiliary display device and the video caching mechanism comprises: the display actions of the auxiliary display device and of the display screen occur after the caching action of the video caching mechanism.
  5. The mobile terminal real content analysis system according to any one of claims 1 to 4, characterized in that:
    the positive correlation between the number of learning actions performed on the recurrent neural network and the mean algorithm complexity of the various beauty modes present in the anchor live-streaming APP comprises: the smaller the value of the mean algorithm complexity of the various beauty modes present in the anchor live-streaming APP, the fewer learning actions are performed on the recurrent neural network;
    wherein the value of the set total number being proportional to the total number of beauty modes present in the anchor live-streaming APP comprises: the fewer beauty modes exist in the anchor live-streaming APP, the smaller the determined value of the set total number.
  6. The mobile terminal real content analysis system according to any one of claims 1 to 4, characterized in that:
    sending the recurrent neural network after each learning action to the pattern identification device for its use comprises: each learning action is performed using a video clip whose beauty identification result is already known, in which the set total number of input contents corresponding to that clip are fed to the input end of the recurrent neural network, and the beauty identification result corresponding to that clip is used as the output content of the output end of the recurrent neural network.
  7. The mobile terminal real content analysis system according to claim 6, characterized in that:
    feeding the set total number of input contents to the input end of the recurrent neural network and running the recurrent neural network to obtain the beauty identification result output by the recurrent neural network comprises: when the recurrent neural network outputs the encoded value corresponding to FALSE, it is determined that the anchor in the current video clip uses a beauty mode.
  8. The mobile terminal real content analysis system according to claim 7, characterized in that:
    feeding the set total number of input contents to the input end of the recurrent neural network and running the recurrent neural network to obtain the beauty identification result output by the recurrent neural network comprises: when the recurrent neural network outputs the encoded value corresponding to TRUE, it is determined that the anchor in the current video clip uses the bare-face mode.
  9. The mobile terminal real content analysis system according to any one of claims 1 to 4, characterized in that:
    selecting, according to the timestamps respectively corresponding to the plurality of remaining frames, a set total number of evenly time-spaced remaining frames as the set total number of input contents comprises: the timestamps respectively corresponding to the selected remaining frames are pairwise equally spaced in time.
  10. The mobile terminal real content analysis system according to any one of claims 1 to 4, characterized in that:
    receiving the video clip sent from the live-video server and caching the video clip as the current video clip comprises: the live-video server is a big-data application network element;
    wherein receiving the video clip sent from the live-video server and caching the video clip as the current video clip comprises: the video clip is cached in a first-in-first-out storage mode.
PCT/CN2022/113166 2022-06-21 2022-08-18 Mobile terminal real content analysis, built-in anti-fraud and fraud determination system and method WO2023245846A1 (zh)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN202210702348.XA CN114900712A (zh) 2022-06-21 2022-06-21 移动终端真实内容解析系统
CN202210702348.X 2022-06-21
CN202210736338.8 2022-06-27
CN202210736338.8A CN115022880A (zh) 2022-06-27 2022-06-27 移动终端内置反诈系统
CN202210815250 2022-07-12
CN202210815250.5 2022-07-12

Publications (1)

Publication Number Publication Date
WO2023245846A1 true WO2023245846A1 (zh) 2023-12-28

Family

ID=89378973

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/113166 2022-06-21 2022-08-18 Mobile terminal real content analysis, built-in anti-fraud and fraud determination system and method WO2023245846A1 (zh)

Country Status (1)

Country Link
WO (1) WO2023245846A1 (zh)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110490031A (zh) * 2018-05-15 2019-11-22 武汉斗鱼网络科技有限公司 一种通用数字识别的方法、存储介质、电子设备及系统
WO2021114708A1 (zh) * 2019-12-09 2021-06-17 上海幻电信息科技有限公司 多人视频直播业务实现方法、装置、计算机设备
CN113382279A (zh) * 2021-06-15 2021-09-10 北京百度网讯科技有限公司 直播推荐方法、装置、设备、存储介质以及计算机程序产品
CN114900712A (zh) * 2022-06-21 2022-08-12 喻荣先 移动终端真实内容解析系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22947587

Country of ref document: EP

Kind code of ref document: A1