CN111654752A - Multimedia information playing method, device and related equipment - Google Patents

Multimedia information playing method, device and related equipment

Info

Publication number
CN111654752A
Authority
CN
China
Prior art keywords
information
target
target object
multimedia
multimedia information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010600706.7A
Other languages
Chinese (zh)
Other versions
CN111654752B (en)
Inventor
黄杰怡
黄其亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010600706.7A priority Critical patent/CN111654752B/en
Publication of CN111654752A publication Critical patent/CN111654752A/en
Application granted granted Critical
Publication of CN111654752B publication Critical patent/CN111654752B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/462Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
    • H04N21/4627Rights management associated to the content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4667Processing of monitored end-user data, e.g. trend analysis based on the log file of viewer selections
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4668Learning process for intelligent management, e.g. learning user preferences for recommending movies for recommending content, e.g. movies

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Social Psychology (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present disclosure provides a multimedia information playing method, a multimedia information playing device, an electronic device and a computer readable storage medium, wherein the method comprises the following steps: determining a target object; acquiring target portrait information of the target object, wherein the target portrait information comprises multimedia acceptance degree information of the target object, and the target portrait information is determined according to historical behavior information of the target object for historical multimedia information; acquiring first multimedia information, wherein the first multimedia information comprises target content; and playing the first multimedia information according to the target content of the first multimedia information and the multimedia acceptance degree information of the target object. According to the technical scheme provided by the embodiment of the disclosure, refined multimedia acceptance degree grading can be carried out on the target object, and the target multimedia is played to the target object according to the refined grading, so that the preference requirements of different objects are effectively taken into account.

Description

Multimedia information playing method, device and related equipment
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, and in particular, to a multimedia information playing method, apparatus, and related device.
Background
Watching horror films can meet people's psychological need to pursue stimulation, but overly stimulating content can cause psychological and physiological discomfort.
In the related art, different target objects are generally classified into viewing levels based on age attributes, so as to determine whether the target object is suitable for viewing a target horror film according to the viewing levels of the target objects.
However, rating target objects based on the age attribute alone is too coarse: adult objects differ greatly in their receptivity to stimulating content, so the related art cannot effectively cover the requirements of objects with different stimulus preferences.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The embodiment of the disclosure provides a multimedia information playing method and device, an electronic device and a computer readable storage medium, which can play target multimedia to a target object according to the multimedia acceptance degree of the target object, and effectively meet the requirements of objects with different preferences.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
The embodiment of the disclosure provides a multimedia information playing method, which includes: determining a target object; acquiring target portrait information of the target object, wherein the target portrait information comprises multimedia acceptance degree information of the target object, and the target portrait information is determined according to historical behavior information of the target object for historical multimedia information; acquiring first multimedia information, wherein the first multimedia information comprises target content; and playing the first multimedia information according to the target content of the first multimedia information and the multimedia acceptance degree information of the target object.
The embodiment of the disclosure provides a multimedia information playing method, which includes: acquiring historical behavior information of a target object for second multimedia information; determining a target portrait of the target object according to the historical behavior information, wherein the target portrait comprises multimedia acceptance degree information of the target object; and playing target multimedia information to the target object according to the multimedia acceptance degree information of the target object.
The disclosed embodiment provides a multimedia information playing device, which includes: a target object determination module, a target portrait determination module, a first multimedia information obtaining module and a first playing module.
Wherein the target object determination module may be configured to determine a target object. The target representation determination module may be configured to obtain target representation information for the target object, the target representation information including multimedia acceptance information for the target object, wherein the target representation information is determined based on historical behavior information of the target object with respect to historical multimedia information. The first multimedia information obtaining module may be configured to obtain first multimedia information, where the first multimedia information includes target content. The first playing module may be configured to play the first multimedia information according to the target content of the first multimedia information and the multimedia acceptance information of the target object.
In some embodiments, the first playing module may include: the device comprises a first prompt information determining unit and a first prompt information display unit.
The first prompt information determining unit may be configured to determine the first prompt information according to the target content of the first multimedia information and the multimedia acceptance level information of the target object. The first prompt information presentation unit may be configured to present the first prompt information before playing the first multimedia information.
In some embodiments, the target representation information further includes a first sensitive element of the target object.
In some embodiments, the first playing module may include: a desensitizing unit.
The desensitization unit may be configured to perform desensitization processing on a first sensitive element of the target object when the first multimedia information is played if the first multimedia information includes the first sensitive element.
In some embodiments, the target representation information includes a second sensitive element of the target object.
In some embodiments, the first playing module may include: a pause unit and a second prompt message display unit.
The pause unit may be configured to pause the playing of the first multimedia information before playing the second sensitive element if the first multimedia information includes the second sensitive element of the target object. The second prompt information presentation unit may be configured to present second prompt information.
In some embodiments, the target representation information further includes a third sensitive element of the target object.
In some embodiments, the first playing module may include: a substitute multimedia information acquisition unit and a substitute unit.
Wherein the alternative multimedia information obtaining unit may be configured to obtain alternative multimedia information if the first multimedia information includes a third sensitive element of the target object. The replacing unit may be configured to replace the playing of the segment in which the third sensitive element is located with the replacing multimedia information.
In some embodiments, the multimedia information playing apparatus may further include: and a recommendation module.
Wherein the recommendation module may be configured to recommend multimedia information to the target object based on the target representation information of the target object.
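As an illustrative, non-limiting sketch only (the field names stimulation_level, elements, acceptance_level and sensitive_elements are assumptions and do not appear in the present disclosure), such a recommendation filter could look as follows:

```python
# Illustrative sketch, not the claimed implementation: keep only candidates whose
# stimulation level does not exceed the target object's acceptance level and which
# contain none of its sensitive elements.

def recommend(candidates, target_portrait):
    acceptance = target_portrait["acceptance_level"]         # e.g. 0 = avoidance, 1 = light, 2 = heavy preference
    sensitive = set(target_portrait["sensitive_elements"])   # e.g. {"ghost", "zombie"}
    return [
        item for item in candidates
        if item["stimulation_level"] <= acceptance
        and not set(item["elements"]) & sensitive
    ]
```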
Since each functional module of the multimedia information playing apparatus according to the exemplary embodiment of the present disclosure corresponds to the step of the exemplary embodiment of the multimedia information playing method, it is not described herein again.
The disclosed embodiment provides a multimedia information playing device, which includes: a historical behavior information determination module, a target portrait determination module and a second playing module.
Wherein the historical behavior information determination module may be configured to obtain historical behavior information of the target object for the second multimedia information. The target representation determination module may be configured to determine a target representation of the target object based on the historical behavior information, the target representation including multimedia acceptance information for the target object. The second playing module may be configured to play the target multimedia information to the target object according to the multimedia acceptance information of the target object.
In some embodiments, the second multimedia information is target video information, the target representation further includes a sensitive scene of the target object, and the historical behavior information includes: stress response information of the target object and first time information corresponding to the stress response information.
In some embodiments, the target representation determination module may include: a target frame image determining unit and a sensitive scene determining unit.
The target frame image determining unit may be configured to determine a target frame image in the target video according to the stress response information of the target object and the first time information corresponding to the stress response information. The sensitive scene determining unit may be configured to determine a sensitive scene of the target object from the target frame image.
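A minimal sketch of this mapping, assuming the stress responses are recorded as playback timestamps in seconds and the target video has a known frame rate (the function name, the reaction-delay compensation and its value are illustrative assumptions, not part of the present disclosure):

```python
def target_frame_indices(stress_event_times, frame_rate, reaction_delay=0.5):
    """Map each stress-response timestamp (first time information) to the index
    of the frame that was on screen when the reaction was triggered.

    reaction_delay compensates for the lag between the stimulus appearing on
    screen and the observable stress response; its value is an assumption."""
    indices = []
    for event_time in stress_event_times:                 # seconds into the target video
        stimulus_time = max(event_time - reaction_delay, 0.0)
        indices.append(int(stimulus_time * frame_rate))   # frame that contained the stimulus
    return indices

# Example: stress responses at 63.2 s and 120.8 s in a 25 fps target video
print(target_frame_indices([63.2, 120.8], frame_rate=25))   # [1567, 3007]
```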
In some embodiments, the target representation further includes a first target sensitive element of the target object, and the historical behavior information further includes eye movement information of the target object.
In some embodiments, the target representation determination module may include a first target sensitive element determination unit.
Wherein the first target sensitive element determining unit may be configured to determine the first target sensitive element of the target object in the target frame image according to eye movement information of the target object.
In some embodiments, the eye movement information includes position information and movement information of an eyeball of the target object.
In some embodiments, the first target sensitive element determining unit may include: a line-of-sight region determination subunit and a first target sensitive element determination subunit.
Wherein the gaze region determination subunit may be configured to determine a gaze region of the target object in the target frame image according to the position information of the eyeball. The first target sensitive element determining subunit may be configured to determine the first target sensitive element of the target object according to the gaze area of the target object and the movement information of the eyeball.
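As a rough sketch only, assuming the elements appearing in the target frame image have been annotated with bounding boxes and labels (the coordinate convention, the 'abrupt_shift' flag and the tolerance radius are assumptions for illustration):

```python
def first_target_sensitive_element(gaze_point, eye_movement, frame_elements, radius=80):
    """gaze_point: (x, y) gaze position estimated from the eyeball position information.
    eye_movement: movement information, e.g. 'abrupt_shift' when the line of sight suddenly moves away.
    frame_elements: annotated elements of the target frame image, each with a bounding box and a label."""
    if eye_movement != "abrupt_shift":        # here only an abrupt gaze shift is treated as aversion
        return None
    gx, gy = gaze_point
    for element in frame_elements:
        x0, y0, x1, y1 = element["bbox"]
        # the element the gaze region covered (within `radius`) just before the shift
        if x0 - radius <= gx <= x1 + radius and y0 - radius <= gy <= y1 + radius:
            return element["label"]
    return None
```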
In some embodiments, the second multimedia information is target audio information, the target representation further includes a second target sensitive element of the target object, and the historical behavior information includes stress response information of the target object and second time information corresponding to the stress response information.
In some embodiments, the target representation determination module may include: and a second target sensitive element determining unit.
The second target sensitive element determining unit may be configured to determine a second target sensitive element of the target object in the target audio according to the stress response information of the target object and second time information corresponding to the stress response information.
An embodiment of the present disclosure provides an electronic device, including: one or more processors; a storage device, configured to store one or more programs, and when the one or more programs are executed by the one or more processors, enable the one or more processors to implement any of the above multimedia information playing methods.
The embodiment of the present disclosure provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the multimedia information playing method according to any one of the above items.
Embodiments of the present disclosure provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions, so that the computer device executes the multimedia information playing method of any one of the above.
According to the multimedia information playing method and device, the electronic device and the computer readable storage medium provided by the embodiments of the present disclosure, the multimedia acceptance degree information of the target object is determined according to the historical behavior information of the target object for the historical multimedia information, and the first multimedia information is played to the target object according to the multimedia acceptance degree information of the target object. According to the technical scheme provided by the embodiment, the target object's acceptance degree of multimedia is finely graded according to its historical behavior, the requirements of different target objects are met, and user experience is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. The drawings described below are merely some embodiments of the present disclosure, and other drawings may be derived from those drawings by those of ordinary skill in the art without inventive effort.
Fig. 1 is a schematic diagram illustrating an exemplary system architecture of a multimedia information playing method or a multimedia information playing apparatus applied to an embodiment of the present disclosure.
Fig. 2 is a schematic diagram illustrating a structure of a computer system applied to a multimedia information playing apparatus according to an exemplary embodiment.
Fig. 3 is a flow chart illustrating a method of playing multimedia information according to an exemplary embodiment.
Fig. 4 is a flow chart illustrating a method of playing multimedia information according to an exemplary embodiment.
FIG. 5 is a diagram illustrating a first prompt message, according to an example embodiment.
Fig. 6 is a flow chart illustrating a method of playing multimedia information according to an exemplary embodiment.
FIG. 7 is a diagram illustrating a coding scheme in accordance with an exemplary embodiment.
Fig. 8 is a flowchart illustrating a multimedia information playing method according to an exemplary embodiment.
FIG. 9 is a diagram illustrating presentation of second prompt information according to an exemplary embodiment.
Fig. 10 is a flow chart illustrating a method of playing multimedia information according to an exemplary embodiment.
Fig. 11 is a flow chart illustrating a method of playing multimedia information according to an exemplary embodiment.
FIG. 12 illustrates a movie recommendation interface in accordance with an exemplary embodiment.
Fig. 13 is a flowchart illustrating a multimedia information playing method according to an exemplary embodiment.
Fig. 14 is a multimedia playback diagram according to an example embodiment.
Fig. 15 is a schematic diagram illustrating a multimedia playback according to an exemplary embodiment.
Fig. 16 is a flowchart of step S02 in fig. 13 in an exemplary embodiment.
FIG. 17 is a flowchart of step S022 of FIG. 16 in an exemplary embodiment.
FIG. 18 is a flowchart of step S03 of FIG. 13 in an exemplary embodiment.
Fig. 19 illustrates a multimedia playback method according to an exemplary embodiment.
Fig. 20 is a block diagram illustrating a multimedia information playing apparatus according to an exemplary embodiment.
Fig. 21 is a block diagram illustrating a multimedia information playing apparatus according to an exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The same reference numerals denote the same or similar parts in the drawings, and thus, a repetitive description thereof will be omitted.
The described features, structures, or characteristics of the disclosure may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the disclosure.
The drawings are merely schematic illustrations of the present disclosure, in which the same reference numerals denote the same or similar parts, and thus, a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and steps, nor do they necessarily have to be performed in the order described. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
In this specification, the terms "a", "an", "the", "said" and "at least one" are used to indicate the presence of one or more elements/components/etc.; the terms "comprising," "including," and "having" are intended to be inclusive and mean that there may be additional elements/components/etc. other than the listed elements/components/etc.; the terms "first," "second," and "third," etc. are used merely as labels, and are not limiting on the number of their objects.
The following detailed description of exemplary embodiments of the disclosure refers to the accompanying drawings.
Fig. 1 is a schematic diagram showing an exemplary system architecture of a multimedia information playing method or a multimedia information playing apparatus that can be applied to the embodiments of the present disclosure.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages and the like. The terminal devices 101, 102, 103 may be various electronic devices having display screens and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, wearable devices, virtual reality devices, smart home devices, and the like.
The terminal device may, for example, determine the target object; the terminal device may, for example, obtain target portrait information of the target object, wherein the target portrait information comprises multimedia acceptance degree information of the target object, and the target portrait information is determined according to historical behavior information of the target object for historical multimedia information; the terminal device may, for example, obtain first multimedia information, the first multimedia information including the target content; and the terminal device may play the first multimedia information according to the target content of the first multimedia information and the multimedia acceptance degree information of the target object.
The server 105 may be a server that provides various services, such as a background management server that provides support for devices operated by users using the terminal apparatuses 101, 102, 103. The background management server can analyze and process the received data such as the request and feed back the processing result to the terminal equipment.
The server 105 may, for example, obtain historical behavior information of the target object for the second multimedia information; server 105 may determine a target representation of the target object, for example, based on the historical behavior information, the target representation including multimedia acceptance information for the target object; the server 105 may play the target multimedia information to the target object, for example, according to the multimedia receptivity information of the target object.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is only illustrative, and the server 105 may be a physical server or may be composed of a plurality of servers, and there may be any number of terminal devices, networks and servers according to actual needs.
Referring now to FIG. 2, a block diagram of a computer system 200 suitable for implementing a terminal device of the embodiments of the present application is shown. The terminal device shown in fig. 2 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 2, the computer system 200 includes a Central Processing Unit (CPU) 201 that can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 202 or a program loaded from a storage section 208 into a Random Access Memory (RAM) 203. In the RAM 203, various programs and data necessary for the operation of the system 200 are also stored. The CPU 201, ROM 202, and RAM 203 are connected to each other via a bus 204. An input/output (I/O) interface 205 is also connected to bus 204.
The following components are connected to the I/O interface 205: an input portion 206 including a keyboard, a mouse, and the like; an output section 207 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 208 including a hard disk and the like; and a communication section 209 including a network interface card such as a LAN card, a modem, or the like. The communication section 209 performs communication processing via a network such as the internet. A drive 210 is also connected to the I/O interface 205 as needed. A removable medium 211, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like, is mounted on the drive 210 as necessary, so that a computer program read out therefrom is installed into the storage section 208 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 209 and/or installed from the removable medium 211. The above-described functions defined in the system of the present application are executed when the computer program is executed by the Central Processing Unit (CPU) 201.
It should be noted that the computer readable storage medium shown in the present application can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable storage medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules and/or units and/or sub-units described in the embodiments of the present application may be implemented by software, and may also be implemented by hardware. The described modules and/or units and/or sub-units may also be provided in a processor, and may be described as: a processor includes a transmitting unit, an obtaining unit, a determining unit, and a first processing unit. Wherein the names of such modules and/or units and/or sub-units in some cases do not constitute a limitation on the modules and/or units and/or sub-units themselves.
As another aspect, the present application also provides a computer-readable storage medium, which may be contained in the apparatus described in the above embodiments; or may be separate and not incorporated into the device. The computer readable storage medium carries one or more programs which, when executed by a device, cause the device to perform functions including: determining a target object; acquiring target portrait information of the target object, wherein the target portrait information comprises multimedia acceptance degree information of the target object, and the target portrait information is determined according to historical behavior information of the target object for historical multimedia information; acquiring first multimedia information, wherein the first multimedia information comprises target content; and playing the first multimedia information according to the target content of the first multimedia information and the multimedia acceptance degree information of the target object.
The technical scheme provided by the embodiment of the disclosure can also relate to computer vision, voice, machine learning and other technologies in the artificial intelligence technology.
Among them, Artificial Intelligence (AI) is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and produce a new intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence is the study of the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision making.
The artificial intelligence technology is a comprehensive subject covering a wide range of fields, involving both hardware-level technology and software-level technology. The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises computer vision technology, voice processing technology, natural language processing technology, machine learning/deep learning and the like.
Computer Vision (CV) technology is a science that studies how to make a machine "see"; it uses cameras and computers instead of human eyes to perform machine vision tasks such as identification, tracking and measurement on a target, and performs further image processing so that the processed image becomes more suitable for human eyes to observe or for transmission to an instrument for detection. As a scientific discipline, computer vision studies related theories and techniques in an attempt to build artificial intelligence systems that can capture information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technologies, virtual reality, augmented reality, simultaneous localization and mapping, and other technologies, and also include common biometric technologies such as face recognition and fingerprint recognition.
The key technologies of Speech Technology are Automatic Speech Recognition (ASR), speech synthesis (TTS) and voiceprint recognition. Enabling computers to listen, see, speak and feel is the development direction of future human-computer interaction, and voice is expected to become one of the most promising human-computer interaction modes.
Machine Learning (ML) is a multi-domain interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory and other disciplines. It specializes in studying how a computer simulates or realizes human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures so as to continuously improve its performance. Machine learning is the core of artificial intelligence, is the fundamental approach for making computers intelligent, and is applied to all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and formal education learning.
Fig. 3 is a flow chart illustrating a method of playing multimedia information according to an exemplary embodiment. The method provided by the embodiment of the present disclosure may be processed by any electronic device with computing processing capability, for example, the terminal devices 102 and 103 in the embodiment of fig. 1 described above, and in the following embodiments, the terminal devices 102 and 103 are taken as an example for illustration, but the present disclosure is not limited thereto.
Referring to fig. 3, a multimedia information playing method provided by an embodiment of the present disclosure may include the following steps.
In some embodiments, the multimedia information may generally include various media forms such as text information, sound information, image information, or video information, which is not limited by this disclosure.
In step S1, a target object is determined.
In some embodiments, the target object may refer to a user who needs to watch a target video, read a target text, or view a target picture, and may also refer to a user who needs to listen to target audio, which is not limited by this disclosure.
In some embodiments, the target object may be determined by capturing an image, a voice, an identification of the target object, and the like.
In some embodiments, the image may be processed by image detection techniques to determine the target object; the acquired image information may also be processed by a trained neural network model to determine the target object; the collected sound may also be processed by automatic speech recognition techniques to determine the target object.
It is to be understood that the present disclosure is not limited to the above-described method of determining a target object.
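As a minimal, hedged sketch of step S1 under the face-recognition route (the embedding input, the registered-profile store and the similarity threshold below are assumptions; the present disclosure does not prescribe a specific model):

```python
import numpy as np

def determine_target_object(face_embedding, registered_profiles, threshold=0.8):
    """Compare the embedding extracted from the captured image against
    pre-registered user embeddings and return the best-matching user, if any.

    face_embedding: 1-D vector produced upstream by an assumed face-recognition model.
    registered_profiles: dict mapping user id -> reference embedding."""
    best_id, best_score = None, -1.0
    for user_id, reference in registered_profiles.items():
        score = float(np.dot(face_embedding, reference) /
                      (np.linalg.norm(face_embedding) * np.linalg.norm(reference)))
        if score > best_score:
            best_id, best_score = user_id, score
    return best_id if best_score >= threshold else None
```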
In step S2, target representation information of the target object is obtained, the target representation information including multimedia acceptance information of the target object, wherein the target representation information is determined according to historical behavior information of the target object for historical multimedia information.
In some embodiments, the historical multimedia information may refer to related multimedia information (e.g., video, audio, or pictures, etc.) that the target object watched or listened to in a past period of time, and may also refer to test multimedia information that is played before the first multimedia information is played, which is not limited by the present disclosure.
In some embodiments, the historical behavior information of the target object with respect to the historical multimedia information may refer to some stress responses of the target object when viewing or listening to the historical multimedia information, such as screaming (e.g., exclaiming "oh my god", "that scared me", etc.), covering the mouth, abruptly diverting the gaze, abrupt pupil dilation, and the like, which is not limited by the present disclosure.
For example, stress responses of the target object when viewing target thriller films within the past month may be collected to generate portrait information of the target object for thriller films; stress responses of the target object when listening to "ghost stories" within the last three months may be collected to generate portrait information of the target object for "ghost stories"; stress responses of the target object when watching a test video/audio may also be collected to generate portrait information of the target object for the target video/target audio; the present disclosure does not limit the manner of collecting the historical behavior information.
In some embodiments, the historical behavior information of the target object with respect to the historical multimedia information may refer to a stress response of the target object when viewing or listening to the relevant multimedia information and time information of the stress response, for example, the mouth-covering behavior of the target object when viewing a horror film and the time information corresponding to the mouth-covering behavior; for another example, the ear-covering behavior of the target object when listening to a ghost story and the corresponding time information; and for another example, the screaming behavior of the target object when viewing a picture and the corresponding time information.
In some embodiments, the portrait information of each object may be determined in advance from the historical behavior information of different objects with respect to historical multimedia information, and then stored in advance. When the target object is determined, the target portrait information of the target object may be determined from the pre-stored portrait information.
In some embodiments, the multimedia acceptance information in the portrait information of the target object may be classification information; for example, the multimedia acceptance of the target object may be classified into heavy preference for stimulation, light preference for stimulation, and stimulation avoidance categories, which is not limited by the present disclosure.
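One hedged way to derive such a three-way grading from the historical behavior information is a simple frequency rule over the recorded stress responses; the record fields and the thresholds below are assumptions for illustration, not values fixed by the present disclosure:

```python
def grade_acceptance(history_records):
    """history_records: one dict per historical viewing/listening session, with the
    number of stress responses observed and the number of stimulating segments shown."""
    responses = sum(r["stress_responses"] for r in history_records)
    segments = max(sum(r["stimulating_segments"] for r in history_records), 1)
    reaction_rate = responses / segments      # fraction of stimulating segments that triggered a reaction
    if reaction_rate < 0.2:
        return "heavy_preference"             # tolerates most stimulating content
    if reaction_rate < 0.6:
        return "light_preference"
    return "stimulus_avoidance"

# Example: 8 reactions over 20 stimulating segments -> "light_preference"
print(grade_acceptance([{"stress_responses": 8, "stimulating_segments": 20}]))
```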
In step S3, first multimedia information is acquired, the first multimedia information including target content.
In some embodiments, the first multimedia information may refer to multimedia information to be played, such as video, audio, or pictures to be played, which is not limited by the present disclosure.
It is understood that, before playing the first multimedia information, the target content of the first multimedia information and the stimulation degree of the target content may be known in advance, so that the first multimedia information can be played to the target object according to the target content and the multimedia acceptance degree information of the target object.
In some embodiments, the target content of the first multimedia information may be classified in advance, for example into: stimulating, generally stimulating, non-stimulating, and the like, which is not limited by the present disclosure.
In step S4, the first multimedia information is played according to the target content of the first multimedia information and the multimedia acceptance information of the target object.
In some embodiments, how to play the first multimedia information to the target object may be determined according to the stimulation degree of the first multimedia information and the multimedia acceptance degree information of the target object.
In some embodiments, before playing the first multimedia information, prompt information may be presented to the target object according to the stimulation level of the first multimedia information and the multimedia acceptance degree information of the target object.
For example, assuming that the stimulation level of the first multimedia information is extreme stimulation and the multimedia acceptance level of the target object is only a light preference for stimulation, the target object may be prompted when playing the first multimedia information: "the XX content is extremely stimulating and may not be suitable for you, please consider whether to continue", or part of the content of the first multimedia information may be played after the prompt so that the target object decides whether to continue; for another example, assuming that the stimulation degree of the first multimedia information is mild stimulation and the multimedia acceptance degree of the target object is a heavy preference for stimulation, the target object may be prompted when playing the first multimedia information: "the XX content may not be stimulating enough for you, please consider whether to select other XX"; for another example, if the stimulation level of the first multimedia information is heavy stimulation and the multimedia acceptance level of the target object is a heavy preference for stimulation, the target object may be shown a prompt such as "the XX content is extremely stimulating and meets your demand, let us feel the heartbeat together" when playing the first multimedia information.
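The matching in the above examples between the target content's stimulation level and the target object's multimedia acceptance level can be pictured as a small decision table; the level names and prompt wording below are illustrative only:

```python
def first_prompt(stimulation_level, acceptance_level):
    """stimulation_level: 'extreme' | 'general' | 'mild'   (rating of the target content)
    acceptance_level: 'heavy_preference' | 'light_preference' | 'stimulus_avoidance'"""
    if stimulation_level == "extreme" and acceptance_level != "heavy_preference":
        return ("This content is extremely stimulating and may not be suitable for you. "
                "Please consider whether to continue.")
    if stimulation_level == "mild" and acceptance_level == "heavy_preference":
        return ("This content may not be stimulating enough for you. "
                "Please consider whether to choose something else.")
    if stimulation_level == "extreme" and acceptance_level == "heavy_preference":
        return "Extremely stimulating content that meets your demand. Let us feel the heartbeat together."
    return None   # no first prompt information needed; play directly
```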
According to the technical scheme provided by this embodiment, the multimedia acceptance degree information of the target object is determined according to the historical behavior information of the target object for the historical multimedia information, and the first multimedia information is played to the target object according to the multimedia acceptance degree information of the target object. According to the technical scheme provided by this embodiment, the multimedia acceptance degrees of different objects can be finely graded according to the historical behaviors of the different objects, and the first multimedia information is played to the target object according to the graded multimedia acceptance degree, so that different requirements of different target objects can be covered and user experience is improved.
Fig. 4 is a flow chart illustrating a method of playing multimedia information according to an exemplary embodiment. Referring to fig. 4, the multimedia information playing method may include the following steps.
In step S1, a target object is determined.
In step S2, target representation information of the target object is obtained, the target representation information including multimedia acceptance information of the target object, wherein the target representation information is determined according to historical behavior information of the target object for historical multimedia information.
In step S3, first multimedia information is acquired, the first multimedia information including target content.
In step S411, first prompt information is determined according to the target content of the first multimedia information and the multimedia acceptance information of the target object.
In step S412, before the first multimedia message is played, the first prompt message is presented.
In some embodiments, before playing the first multimedia information, prompt information may be presented to the target object according to the stimulation level of the first multimedia information and the multimedia acceptance degree information of the target object.
For example, assuming that the stimulation level of the first multimedia information is extreme stimulation and the multimedia acceptance level of the target object is only a light preference for stimulation, the target object may be prompted when playing the first multimedia information: "the XX content is extremely stimulating and may not be suitable for you, please consider whether to continue", or part of the content of the first multimedia information may be played after the prompt so that the target object decides whether to continue; for another example, assuming that the stimulation degree of the first multimedia information is mild stimulation and the multimedia acceptance degree of the target object is a heavy preference for stimulation, the target object may be prompted when playing the first multimedia information: "the XX content may not be stimulating enough for you, please consider whether to select other XX"; for another example, if the stimulation level of the first multimedia information is heavy stimulation and the multimedia acceptance level of the target object is a heavy preference for stimulation, the target object may be shown a prompt such as "the XX content is extremely stimulating and meets your demand, let us feel the heartbeat together" when playing the first multimedia information.
FIG. 5 is a diagram illustrating a first prompt message, according to an example embodiment.
In some embodiments, the first multimedia information may be a target video or a target picture. Assuming that the first multimedia information is the target video, if the multimedia acceptance degree information of the target object does not match the stimulation level of the first multimedia information, the first prompt message shown in fig. 5 may be presented to the target object: "Your stimulation preference level is C (stimulation avoider)! More than 66% of users at this level felt uncomfortable with the current movie". After seeing the first prompt message, the user can click the "continue watching" button to continue watching the video, or click the "change" button to switch to another video. It is to be understood that the present disclosure is not limited to the content of the first prompt message.
According to the technical scheme provided by this embodiment, whether the first multimedia information is suitable for the target object is determined according to the target content of the first multimedia information and the multimedia acceptance degree information of the target object, and the first prompt information is presented to the target object, so that the target object determines whether to continue playing according to the first prompt. According to the technical scheme provided by this embodiment, the acceptance degree information of the target object for similar multimedia information is accurately and effectively determined according to the historical behavior information of the target object for the historical multimedia information, and the first prompt information is presented to the target object according to the multimedia acceptance degree information of the target object. The technical scheme provided by this embodiment fully considers different requirements of different objects and improves user experience.
Fig. 6 is a flow chart illustrating a method of playing multimedia information according to an exemplary embodiment. In some embodiments, the target representation information of the target object may also include a first sensitive element of the target object.
In some embodiments, the first sensitive element of the target object may refer to an element that is not suitable for the target object, such as a ghost element, a zombie element, or a bloody element in a video, or the creak of a door opening, a crying sound, or howling in audio, which is not limited by this disclosure.
Referring to fig. 6, the multimedia information playing method may include the following steps.
In step S1, a target object is determined.
In step S2, target representation information of the target object is obtained, the target representation information including multimedia acceptance information of the target object, wherein the target representation information is determined according to historical behavior information of the target object for historical multimedia information.
In step S3, first multimedia information is acquired, the first multimedia information including target content.
In step S421, if the first multimedia information includes a first sensitive element of the target object, performing desensitization processing on the first sensitive element when the first multimedia information is played.
In some embodiments, the first multimedia information may refer to video information, audio information, or picture information, etc., which is not limited by this disclosure.
In some embodiments, if the first multimedia information is video information and a target frame image in the video includes a first sensitive element of the target object, the sensitive element in the target frame image is desensitized (e.g., by coding/pixelation, overlaying, or matting the picture); if the first multimedia information is audio information and a certain piece of the audio includes a first sensitive element of the target object, the audio piece in which the first sensitive element is located is desensitized (e.g., by muting it or overlaying it with other audio). It is understood that the desensitization processing is not limited to the above embodiments and may be chosen according to practical requirements.
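A minimal sketch of the two desensitization cases mentioned above — coding (pixelating) a region of a target frame image and silencing an audio segment — assuming the frame and the waveform are NumPy arrays; the block size and the function names are assumptions for illustration:

```python
import numpy as np

def pixelate_region(frame, bbox, block=16):
    """Mosaic the bounding box of a first sensitive element in a frame (H x W x 3 array)."""
    x0, y0, x1, y1 = bbox
    region = frame[y0:y1, x0:x1]
    small = region[::block, ::block]                                   # coarse downsample
    mosaic = np.kron(small, np.ones((block, block, 1), dtype=frame.dtype))
    frame[y0:y1, x0:x1] = mosaic[:y1 - y0, :x1 - x0]                   # paste back, cropped to fit
    return frame

def mute_segment(waveform, sample_rate, start_s, end_s):
    """Silence the audio piece that contains the first sensitive element."""
    waveform[int(start_s * sample_rate):int(end_s * sample_rate)] = 0
    return waveform
```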
In some embodiments, the target representation information of the target object may further include sensitive scene information of the target object, and the sensitive scene may refer to a "killer scene", "fighting scene", "cemetery scene", and the like, which are not limited by the present disclosure.
In some embodiments, the first multimedia information may be played to the target object according to the sensitive scene of the target object. For example, if the first multimedia information to be played includes a sensitive scene of the target object, the target object may be reminded before the sensitive scene is played; relatively soothing music may be played while the sensitive scene is played; or desensitization processing may be performed on the frame images related to the sensitive scene when it is played. The method for playing a sensitive scene in the first multimedia information is not limited, and any playing method that can ease the emotion of the target object falls within the protection scope of the present disclosure.
FIG. 7 is a diagram illustrating coding (mosaic) processing according to an exemplary embodiment. As shown in fig. 7, if the target frame image in the first multimedia information includes a first sensitive element to which the target object is sensitive, the first sensitive element may be subjected to coding processing in the target frame image. The effect of coding the first sensitive element is shown at 701 in fig. 7.
Fig. 8 is a flowchart illustrating a multimedia information playing method according to an exemplary embodiment.
In some embodiments, the target representation information of the target object may include a second sensitive element of the target object, where the second sensitive element may refer to an element to which the target object is extremely sensitive. For example, if the target object is extremely afraid of ghost elements, zombie elements, gory elements, and the like, these elements may be second sensitive elements of the target object.
It is understood that "extremely sensitive" is a relative description, used to indicate that the target object reacts more strongly to a second sensitive element than to other sensitive elements.
Referring to fig. 8, the multimedia information playing method may include the following steps.
In step S1, a target object is determined.
In step S2, target representation information of the target object is obtained, the target representation information including multimedia acceptance information of the target object, wherein the target representation information is determined according to historical behavior information of the target object for historical multimedia information.
In step S3, first multimedia information is acquired, the first multimedia information including target content.
In step S431, if the first multimedia information includes the second sensitive element of the target object, the playing of the first multimedia information is paused before the playing of the second sensitive element.
Generally, when a second sensitive element to which the target object is extremely sensitive exists in the first multimedia information, the target object may be unwilling to see or hear the second sensitive element.
In some embodiments, if it is determined that the first multimedia information includes a second sensitive element to which the target object is extremely sensitive, playback may be paused automatically before the multimedia segment containing the second sensitive element is played.
In step S432, the second prompt information is acquired.
In some embodiments, the second prompt information may be, for example, a prompt informing the target object that extremely unsuitable content may follow and asking the target object whether to skip it, but the disclosure is not limited thereto.
In step S433, the second prompt message is displayed.
In some embodiments, the first multimedia information may refer to a target video. FIG. 9 illustrates presentation of the second prompt information according to an exemplary embodiment. As shown in fig. 9, when the second sensitive element of the target object is about to appear during playback of the target video, the playing of the target video may be paused and the second prompt message shown in fig. 9, "the following video may cause discomfort, do you wish to continue", is presented; the target object may then decide whether to continue watching the video through the "cancel" and "continue" buttons.
It is understood that, if the first multimedia information is audio information, the second prompt information may be presented to the target object by voice broadcast, and likewise the target object may decide whether to continue listening to the audio information by a voice instruction.
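A minimal sketch of the pause-and-prompt behaviour described in steps S431 to S433 is given below. The Player interface (play_until, pause, seek, resume) and the ask_user callback are hypothetical names introduced only for illustration; the sensitive segments are assumed to have been located in advance as timestamp ranges.

```python
from dataclasses import dataclass

@dataclass
class SensitiveSegment:
    start: float  # seconds
    end: float

def play_with_guard(player, segments, ask_user, lead_time: float = 2.0):
    """Pause shortly before each extremely sensitive segment and show the second prompt."""
    for seg in sorted(segments, key=lambda s: s.start):
        player.play_until(max(seg.start - lead_time, 0.0))
        player.pause()
        if ask_user("The following video may cause discomfort. Continue?"):
            player.resume()        # user chose to watch the segment
        else:
            player.seek(seg.end)   # skip past the second sensitive element
            player.resume()
    player.play_until(None)        # play the remainder of the media
```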
According to the technical scheme provided by the embodiment of the present disclosure, the second sensitive element to which the target object is extremely sensitive is automatically determined from the historical behavior information of the target object, the playing of the first multimedia information is paused before the second sensitive element is played, and the target object is prompted, so that the target object can decide whether to continue playing. In this way, the target object can be psychologically prepared before the second sensitive element is played, which prevents the target object from being frightened by its sudden appearance.
Fig. 10 is a flow chart illustrating a method of playing multimedia information according to an exemplary embodiment. In some embodiments, the target representation information of the target object may also include a third sensitive element of the target object.
In some embodiments, the third sensitive element of the target object may refer to an element that is not suitable for the target object, such as a ghost element, a zombie element, or a gory element in a video, or a creaking door sound, a crying sound, or a howl in an audio, which is not limited by this disclosure. Referring to fig. 10, the multimedia information playing method may include the following steps.
In step S1, a target object is determined.
In step S2, target representation information of the target object is obtained, the target representation information including multimedia acceptance information of the target object, wherein the target representation information is determined according to historical behavior information of the target object for historical multimedia information.
In step S3, first multimedia information is acquired, the first multimedia information including target content.
In step S441, if the first multimedia information includes the third sensitive element of the target object, alternative multimedia information is obtained.
In step S442, the substitute multimedia information is used to substitute for the playing of the segment in which the third sensitive element is located.
In some embodiments, if the first multimedia information includes a third sensitive element of the target object, the multimedia segment in which the third sensitive element is located may be replaced with the replacement multimedia information.
In some embodiments, if the first multimedia information is a target video and the third sensitive element is present in a target frame image, the target frame image may be replaced by a replacement frame image, so that the replacement frame image is played at the time when the target frame image would have been played.
In some embodiments, since multimedia information may be formed by fusing a plurality of types of media information (for example, a video may be formed by fusing pictures and music), it may further be determined in which medium of the target multimedia information the third sensitive element exists, and the media segment containing the third sensitive element may be replaced with the corresponding substitute media information.
For example, assuming that the third sensitive element exists in the music of the target video, the audio segment containing the third sensitive element may be replaced by a music segment that can ease the emotion.
In some embodiments, if the first multimedia information is the target audio information and the third sensitive element exists in an audio segment, the audio segment can be replaced by a substitute audio segment.
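The replacement described in steps S441 and S442 can be sketched as a simple splice over a decoded frame (or sample) sequence, assuming the span containing the third sensitive element is already known; the function name and list-based representation are illustrative only.

```python
def replace_segment(frames, start_idx: int, end_idx: int, substitute_frames):
    """Return a new sequence with the span [start_idx, end_idx) swapped out."""
    # If the substitute is shorter or longer than the removed span, the overall
    # duration changes; a real player would also need to re-align the audio.
    return list(frames[:start_idx]) + list(substitute_frames) + list(frames[end_idx:])
```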
According to the technical scheme provided by this embodiment, the multimedia segment where the third sensitive element is located is replaced by the substitute multimedia information, so that the target object can be prevented from being frightened and the emotion of the target object can be eased.
Fig. 11 is a flow chart illustrating a method of playing multimedia information according to an exemplary embodiment. Referring to fig. 11, the multimedia information playing method may include the following steps.
In step S1, a target object is determined.
In step S2, target representation information of the target object is obtained, the target representation information including multimedia acceptance information of the target object, wherein the target representation information is determined according to historical behavior information of the target object for historical multimedia information.
In step S3, first multimedia information is acquired, the first multimedia information including target content.
In step S4, the first multimedia information is played according to the target content of the first multimedia information and the multimedia acceptance information of the target object.
In step S5, multimedia information is recommended to the target object based on the target portrait information of the target object.
In some embodiments, corresponding multimedia information may be recommended to the target object according to the multimedia acceptance degree information and the first sensitive element information in the target representation of the target object. For example, assuming that the multimedia acceptance degree of the target object is a mild preference for stimulation, multimedia information of ordinary stimulation degree that does not include a first sensitive element to which the target object is sensitive may be recommended to the target object; the specific recommendation method is not limited by the embodiments of the present disclosure.
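A minimal sketch of such a recommendation filter is shown below. The level ordering and the catalog schema (each item carrying a stimulation level and a set of element tags) are assumptions made for illustration, not part of the disclosure.

```python
LEVELS = ["mild", "moderate", "heavy", "extreme"]            # content stimulation levels
ACCEPTANCE = {"stimulation_avoidance": -1, "mild_preference": 0,
              "moderate_preference": 1, "heavy_preference": 2}

def recommend(catalog, acceptance: str, sensitive_elements: set, top_k: int = 10):
    """Keep items whose stimulation level does not exceed the acceptance level
    and which contain none of the target object's first sensitive elements."""
    max_idx = ACCEPTANCE[acceptance]
    candidates = [
        item for item in catalog
        if LEVELS.index(item["stimulation_level"]) <= max_idx
        and not (set(item["elements"]) & sensitive_elements)
    ]
    # Rank however the platform normally ranks; popularity is a stand-in here.
    return sorted(candidates, key=lambda it: it.get("popularity", 0), reverse=True)[:top_k]
```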
FIG. 12 illustrates a movie recommendation interface according to an exemplary embodiment. As shown in fig. 12, when the target object finishes viewing the current movie, other videos may be recommended to the target object according to the multimedia acceptance degree information of the target object. The target object may select a preferred video from the recommendations for playing, or may click the "replay" button to watch the current movie again.
The technical scheme provided by this embodiment can recommend multimedia information according to the portrait information of different target objects, which can increase the click-through rate of the multimedia information while meeting the requirements of the target objects.
Fig. 13 is a flowchart illustrating a multimedia information playing method according to an exemplary embodiment. Referring to fig. 13, the multimedia information playing method may include the following steps.
In step S01, the historical behavior information of the target object with respect to the second multimedia information is acquired.
In some embodiments, when watching the target video or listening to the target audio, the target object may exhibit stress responses that are reflected in its behavior information. For example, if the target object suddenly averts its gaze and screams when seeing a certain video frame, the target object may be considered sensitive to, and unsuited to, some content of that frame; for another example, if the target object covers its ears upon suddenly hearing a certain sound, the target object may be considered sensitive to some content in the sound.
In some embodiments, the historical behavior information of the target object for the second multimedia information may be determined based on the target object's reaction to viewing or listening to the multimedia information over a past period of time.
In some embodiments, if the second multimedia information is a target video, a prompt such as "the following video may cause discomfort, we advise you to complete the stimulus grading test first" as shown in fig. 14 may be presented before the target object views the target video. If the target object clicks the "start test" button, a test video may be played to the target object to collect historical behavior information of the target object. While watching the test video, the target object may exit at any time by clicking the "exit" button on the interface shown in fig. 15.
In step S02, a target representation of the target object is determined according to the historical behavior information, wherein the target representation includes multimedia acceptance information of the target object.
In some embodiments, a representation of the target subject may be determined based on the stress response of the target subject.
In some embodiments, the number of stress responses of the target object to the second multimedia information, the stress response behavior (e.g., screaming), and the stimulation degree of the second multimedia information may be counted so as to determine the multimedia acceptance degree information of the target object.
For example, assuming that the stimulation degree of the second multimedia information is mild stimulation, and the target object exhibits stress responses many times while viewing or listening to the second multimedia information, each response being extremely severe, the multimedia acceptance degree of the target object may be determined as stimulation avoidance.
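The grading step can be sketched as a simple rule over the counted stress responses; the thresholds and level names below are illustrative assumptions rather than values taken from the disclosure.

```python
def acceptance_level(stimulation_level: str, response_count: int,
                     max_severity: float) -> str:
    """stimulation_level: level of the test clip; max_severity in [0, 1]."""
    if response_count >= 3 and max_severity > 0.8:
        # Strong, repeated reactions even to this clip.
        return "stimulation_avoidance" if stimulation_level == "mild" else "mild_preference"
    if response_count == 0:
        # No visible reaction: the object tolerates at least this level.
        return "heavy_preference" if stimulation_level == "heavy" else "moderate_preference"
    return "mild_preference"
```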
In step S03, the target multimedia information is played to the target object according to the multimedia acceptance information of the target object.
In some embodiments, how to play the first multimedia information to the target object may be determined according to the stimulation degree of the first multimedia information and the multimedia acceptance degree information of the target object.
In some embodiments, before the first multimedia information is played, prompt information may be played to the target object according to the stimulation degree of the first multimedia information and the multimedia acceptance degree information of the target object.
For example, assuming that the stimulation degree of the first multimedia information is extreme stimulation while the multimedia acceptance degree of the target object is only a mild preference for stimulation, the target object may be prompted when the first multimedia information is played: "the XX content is extremely stimulating and may not be suitable for you, please consider whether to continue", and the content may be played after the prompt so that the target object decides whether to continue. For another example, assuming that the stimulation degree of the first multimedia information is mild stimulation while the multimedia acceptance degree of the target object is a heavy preference for stimulation, the target object may be prompted: "the XX content may not be stimulating enough for you, please consider whether to select another XX". For yet another example, if the stimulation degree of the first multimedia information is heavy stimulation and the multimedia acceptance degree of the target object is a heavy preference for stimulation, a prompt such as "the XX content is extremely stimulating and meets your demand, let us feel the heartbeat together" may be presented when the first multimedia information is played.
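A minimal sketch of the prompt selection described above follows; the level ordering and prompt wording are illustrative assumptions.

```python
ORDER = {"mild": 0, "moderate": 1, "heavy": 2, "extreme": 3}

def choose_prompt(content_level: str, acceptance_level: str) -> str:
    """Compare the content's stimulation level with the object's acceptance level."""
    diff = ORDER[content_level] - ORDER[acceptance_level]
    if diff > 0:
        return ("This content is more stimulating than your usual preference "
                "and may not be suitable for you. Continue?")
    if diff < 0:
        return ("This content may not be stimulating enough for you. "
                "Would you like to choose something else?")
    return "This content matches your preference. Enjoy!"
```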
According to the technical scheme provided by this embodiment, the acceptance degree information of the target object for multimedia information is accurately determined by analyzing the historical behavior information of the target object, and the acceptance degree of the target object is fully considered when the target multimedia information is subsequently played to the target object according to its multimedia acceptance degree information. The technical scheme provided by this embodiment can therefore cover the different requirements of different objects.
Fig. 16 is a flowchart of step S02 in fig. 13 in an exemplary embodiment. Referring to fig. 16, the above-mentioned step S02 may include the following steps.
In some embodiments, the second multimedia information may be target video information, the target portrait of the target object may include a sensitive scene of the target object and a first target sensitive element, and the historical behavior information of the target object for the second multimedia information may include stress response information of the target object, first time information corresponding to the stress response information, and eye movement information of the target object.
The sensitive scene may refer to a murder scene, a fighting scene, a cemetery scene, and the like, and the first target sensitive element may refer to, for example, a ghost element, a zombie element, a gory element, and the like, which is not limited by the present disclosure.
In step S021, a target frame image is determined in the target video according to the stress response information of the target object and the first time information corresponding to the stress response information.
In some embodiments, the target frame image in the target video may be determined according to the first time at which the stress response of the target object occurs.
In step S022, a sensitive scene of the target object is determined from the target frame image.
In some embodiments, the target frame image may be processed by a neural network model trained in advance to determine the sensitive scene, contained in the target frame image, to which the target object is sensitive; the target frame image may also be processed by an image processing technique to determine that sensitive scene, which is not limited by the present disclosure.
The target neural network may be trained in advance on images of known scenes, so that it can identify the scene in an image; the target neural network may be a convolutional neural network or a recurrent neural network, which is not limited by the present disclosure.
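A minimal PyTorch sketch of this step is shown below: the target frame is located from the first time information and classified by a convolutional network fine-tuned on labelled scene images. The checkpoint path, label set and frame-access helper are assumptions for illustration; the disclosure only requires a neural network trained in advance.

```python
import torch
import torchvision.transforms as T
from torchvision.models import resnet18

SCENE_LABELS = ["murder_scene", "fighting_scene", "cemetery_scene", "other"]

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def classify_target_frame(get_frame, fps: float, stress_time_s: float, ckpt_path: str) -> str:
    """get_frame(index) is assumed to return an H x W x 3 uint8 array."""
    frame = get_frame(int(stress_time_s * fps))            # frame at the first time
    model = resnet18(num_classes=len(SCENE_LABELS))        # fine-tuned scene classifier
    model.load_state_dict(torch.load(ckpt_path, map_location="cpu"))
    model.eval()
    with torch.no_grad():
        logits = model(preprocess(frame).unsqueeze(0))
    return SCENE_LABELS[int(logits.argmax(dim=1))]
```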
According to the embodiment of the present disclosure, the target frame image can be determined in the target video through the stress response information of the target object and the corresponding first time information, and the scene in the target frame image can be determined through an image processing technique (or a neural network technique), which facilitates understanding of the sensitive scene of the target object.
In some embodiments, the first target sensitive element of the target object may also be determined in the target frame image according to eye movement information of the target object.
In some embodiments, the eye movement information of the target object in the first time period may be tracked by an eye-tracking technique, so as to determine the region in which the gaze position of the target object changes before and after the stress response occurs. For example, if the gaze of the target object is tracked in area A before the stress response occurs, and after the stress response the tracked gaze suddenly and rapidly turns from area A to another area, the element located in area A at the first time may be considered to be the first target sensitive element.
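The gaze-shift heuristic described above can be sketched as follows, assuming the eye tracker delivers timestamped gaze samples in frame coordinates; the grid granularity and time windows are illustrative assumptions.

```python
def sensitive_region(gaze_points, stress_time: float, frame_w: int, frame_h: int,
                     grid: int = 4, window: float = 0.5):
    """gaze_points: list of (t, x, y) samples from the eye tracker.
    Returns the grid cell the gaze dwelt in just before the stress response
    and abruptly left afterwards, or None if no such cell is found."""
    def cell(x, y):
        return (min(int(x / frame_w * grid), grid - 1),
                min(int(y / frame_h * grid), grid - 1))

    before = [cell(x, y) for t, x, y in gaze_points
              if stress_time - window <= t < stress_time]
    after = [cell(x, y) for t, x, y in gaze_points
             if stress_time <= t < stress_time + window]
    if not before or not after:
        return None
    area_a = max(set(before), key=before.count)   # most-dwelt cell before the reaction
    # Only report area A if the gaze really moved away from it afterwards.
    return area_a if after.count(area_a) / len(after) < 0.2 else None
```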
Eye tracking is the process of measuring eye activity. The question of greatest interest in eye-tracking studies is where a person (or animal) is looking, that is, the "point of gaze" or "fixation point". More precisely, instrument equipment performs image processing to locate the pupil and obtain its coordinates, and the gaze or fixation point is calculated through a certain algorithm, so that the computer knows where, and when, the viewer is looking.
In some embodiments, since the target video is multimedia information composed of images and sounds, the first target sensitive element of the target object may instead be present in the audio. Therefore, when the eye-tracking technique does not locate a first target sensitive element, the audio segment corresponding to the first time information of the stress response of the target object may be taken as the first target sensitive element.
According to the embodiment of the present disclosure, the target frame image can be determined in the target video according to the stress response information of the target object and the corresponding first time information, the first target sensitive element of the target object can then be accurately determined in the target frame image through the eye-tracking technique, and the first target sensitive element can also be determined in the audio information of the target video according to the stress response information and its corresponding first time information.
FIG. 17 is a flowchart of step S022 of FIG. 16 in an exemplary embodiment.
In some embodiments, the eye movement information of the target object may include position information and movement information of eyeballs of the target object.
Referring to fig. 17, the above step S022 may include the following steps.
In step S0221, a gaze region of the target object is determined in the target frame image based on the positional information of the eyeballs.
In step S0222, a first target sensitive element of the target object is determined according to the gaze area of the target object and the movement information of the eyeball.
According to the embodiment of the present disclosure, the gaze area of the target object can be determined in the target frame image according to the position information of the eyeballs, and the first target sensitive element of the target object can then be accurately determined according to the gaze area and the movement information of the eyeballs.
In some embodiments, the second multimedia information may be target audio information, the target portrait of the target object may further include a second target sensitive element of the target object, and the historical behavior information of the target object for the historical multimedia information may include stress response information of the target object and second time information corresponding to the stress response information.
In some embodiments, a second target sensitive element of the target object may be determined in the target audio according to the stress response information of the target object and second time information corresponding to the stress response information.
FIG. 18 is a flowchart of step S03 of FIG. 13 in an exemplary embodiment. Referring to fig. 18, the above-mentioned step S03 may include the following steps.
In step S031, the target multimedia information is obtained, where the target multimedia information includes target content.
In step S032, according to the target content of the target multimedia information and the multimedia acceptance information of the target object, the target multimedia information is played to the target object.
In some embodiments, how to play the target multimedia information to the target object may be determined according to the stimulation degree of the target multimedia information and the multimedia acceptance degree information of the target object.
In some embodiments, before the target multimedia information is played, prompt information may be played to the target object according to the stimulation degree of the target multimedia information and the multimedia acceptance degree information of the target object.
For example, assuming that the stimulation degree of the target multimedia information is extreme stimulation while the multimedia acceptance degree of the target object is only a mild preference for stimulation, the target object may be prompted when the target multimedia information is played: "the XX content is extremely stimulating and may not be suitable for you, please consider whether to continue", and the content may be played after the prompt so that the target object decides whether to continue. For another example, assuming that the stimulation degree of the target multimedia information is mild stimulation while the multimedia acceptance degree of the target object is a heavy preference for stimulation, the target object may be prompted: "the XX content may not be stimulating enough for you, please consider whether to select another XX". For yet another example, if the stimulation degree of the target multimedia information is heavy stimulation and the multimedia acceptance degree of the target object is a heavy preference for stimulation, a prompt such as "the XX content is extremely stimulating and meets your demand, let us feel the heartbeat together" may be presented when the target multimedia information is played.
In some embodiments, when the target multimedia information is played, desensitization processing may be performed on a sensitive element of the target object, the sensitive element may be replaced, or playback may be paused when the sensitive element is about to be played, which is not limited by the present disclosure.
According to the multimedia information playing method provided by the embodiment of the disclosure, the multimedia acceptance degree information of the target object is determined according to the historical behavior information of the target object aiming at the historical multimedia information, and the first multimedia information is played to the target object according to the multimedia acceptance degree information of the target object. According to the technical scheme provided by the embodiment, the acceptance degree of the target object to the multimedia is finely graded according to the historical behavior of the target object, the requirements of different target objects are met, and the user experience is improved.
Fig. 19 illustrates a multimedia playback method according to an exemplary embodiment. As shown in fig. 19, the multimedia playing method involves a plurality of execution subjects, such as a user, a client, and a server background.
Referring to fig. 19, the multimedia playing method may include the following steps.
In step S001, the user selects a movie through the client interface.
In step S002, the client determines whether the current movie contains the stimulus content.
If the current movie does not include the stimulus content, performing step S003 to directly play the movie; if the current movie includes the stimulus content, step S004 is performed to prompt the user to perform a rating test.
In step S005, the user decides whether he/she is willing to accept the test.
If the user does not accept the grading test, executing step S003 to directly play the film; if the user accepts the grading test, step S006 is executed to let the user view the test piece.
In steps S007 to S009, while the user watches the test segment, the eye-tracking system of the client tracks the eye movement information of the user, the behavior recognition system recognizes the stress behavior of the user, and the speech recognition system recognizes the stress utterances of the user; the eye movement information, stress behavior, stress utterances, and other information of the user are then sent to the server background.
In step S010, the server background analyzes the sensitive elements of the user according to the eye movement information, stress behaviors, stress utterances, and other information of the user, generates a grading portrait of the user, and sends the grading result in the grading portrait, together with a viewing suggestion generated from the grading result, to the client.
In step S011, the server background determines the elements to be coded according to the grading portrait and sends a coding request to the client.
In step S012, the client performs coding (mosaic) processing on the elements to be coded according to the coding request, and presents the grading result and the viewing suggestion to the user.
In step S013, the user watches the movie according to the grading result.
In step S014, the server background selects suitable movies from the movie pool according to the grading portrait of the user, and sends the recommendation result to the client.
In step S015, the client presents the recommendation result after receiving it.
In step S016, the user browses the recommended movies and continues viewing.
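A minimal sketch of the client-side upload in steps S007 to S009 is shown below, assuming the three recognition systems have already produced timestamped event lists. The endpoint URL and payload schema are hypothetical, introduced only to show the shape of the data the server background would need for the grading in step S010.

```python
import json
import urllib.request

def upload_test_events(user_id: str, clip_id: str, eye_events, stress_events, speech_events,
                       endpoint: str = "https://example.invalid/grading/events"):
    payload = {
        "user_id": user_id,
        "clip_id": clip_id,
        "eye_movement": eye_events,         # [(t, x, y), ...]
        "stress_behaviour": stress_events,  # [{"t": ..., "type": "scream"}, ...]
        "speech": speech_events,            # [{"t": ..., "text": ...}, ...]
    }
    req = urllib.request.Request(endpoint,
                                 data=json.dumps(payload).encode("utf-8"),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status
```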
Fig. 20 is a block diagram illustrating a multimedia information playing apparatus according to an exemplary embodiment. Referring to fig. 20, a multimedia information playing apparatus 2000 provided in an embodiment of the present disclosure may include: a target object determination module 2001, a target portrait determination module 2002, a first multimedia information acquisition module 2003, and a first playback module 2004.
Wherein the target object determination module 2001 may be configured to determine a target object. Target representation determination module 2002 may be configured to obtain target representation information for the target object, the target representation information including multimedia acceptance level information for the target object, wherein the target representation information is determined based on historical behavior information of the target object with respect to historical multimedia information. The first multimedia information obtaining module 2003 may be configured to obtain first multimedia information, where the first multimedia information includes target content. The first playing module 2004 may be configured to play the first multimedia information according to the target content of the first multimedia information and the multimedia receptivity information of the target object.
In some embodiments, the first playing module 2004 may include: the device comprises a first prompt information determining unit and a first prompt information display unit.
The first prompt information determining unit may be configured to determine the first prompt information according to the target content of the first multimedia information and the multimedia acceptance level information of the target object. The first prompt information presentation unit may be configured to present the first prompt information before playing the first multimedia information.
In some embodiments, the target representation information further includes a first sensitive element of the target object.
In some embodiments, the first playing module 2004 may include: a desensitizing unit.
The desensitization unit may be configured to perform desensitization processing on a first sensitive element of the target object when the first multimedia information is played, if the first multimedia information includes the first sensitive element.
In some embodiments, the target representation information includes a second sensitive element of the target object.
In some embodiments, the first playing module 2004 may include: the device comprises a pause unit, a second prompt information acquisition unit and a second prompt information display unit.
The pause unit may be configured to pause the playing of the first multimedia information before playing the second sensitive element if the first multimedia information includes the second sensitive element of the target object. The second prompt information acquisition unit may be configured to acquire second prompt information. The second prompt information presentation unit may be configured to present the second prompt information.
In some embodiments, the target representation information further includes a third sensitive element of the target object.
In some embodiments, the first playing module 2004 may include: a substitute multimedia information acquisition unit and a substitute unit.
Wherein the alternative multimedia information obtaining unit may be configured to obtain alternative multimedia information if the first multimedia information includes a third sensitive element of the target object. The replacing unit may be configured to replace the playing of the segment in which the third sensitive element is located with the replacing multimedia information.
In some embodiments, the multimedia information playing apparatus 2000 may further include: and a recommendation module.
Wherein the recommendation module may be configured to recommend multimedia information to the target object based on the target representation information of the target object.
Since each functional module of the multimedia information playing apparatus 2000 according to the exemplary embodiment of the present disclosure corresponds to the steps of the exemplary embodiment of the multimedia information playing method, it is not described herein again.
Fig. 21 is a block diagram illustrating a multimedia information playing apparatus according to an exemplary embodiment. Referring to fig. 21, a multimedia information playing apparatus 2100 provided in an embodiment of the present disclosure may include: historical behavior determination information module 2101, target representation determination module 2102, and second playback module 2103.
The historical behavior determination information module 2101 may be configured to obtain historical behavior information of the target object viewing the second multimedia information. Target representation determination module 2102 may be configured to determine a target representation for the target object based on the historical behavior information, the target representation including multimedia acceptance information for the target object. The second playing module 2103 may be configured to play the target multimedia information to the target object according to the multimedia acceptance information of the target object.
In some embodiments, the second multimedia information is target video information, the target portrait further includes a sensitive scene of the target object, and the historical behavior information includes stress response information of the target object and first time information corresponding to the stress response information.
In some embodiments, target representation determination module 2102 may include: a target frame image determining unit and a sensitive scene determining unit.
The target frame image determining unit may be configured to determine a target frame image in the target video according to the stress response information of the target object and the first time information corresponding to the stress response information. The sensitive scene determination unit may be configured to determine a sensitive scene of the target object from the target frame image.
In some embodiments, the target representation further includes a first target sensitive element of the target object, and the historical behavior information further includes eye movement information of the target object.
In some embodiments, target representation determination module 2102 may include: a first target sensitive element determination unit. Wherein the first target sensitive element determining unit may be configured to determine the first target sensitive element of the target object in the target frame image according to eye movement information of the target object.
In some embodiments, the eye movement information includes position information and movement information of an eyeball of the target object.
In some embodiments, the first target sensitive element determining unit may include: a line-of-sight region determination subunit and a first target sensitive element determination subunit.
Wherein the gaze region determination subunit may be configured to determine a gaze region of the target object in the target frame image according to the position information of the eyeball. The first target sensitive element determining subunit may be configured to determine the first target sensitive element of the target object according to the gaze area of the target object and the movement information of the eyeball.
In some embodiments, the second multimedia information is target audio information, the target representation further includes a second target sensitive element of the target object, and the historical behavior information includes stress response information of the target object and second time information corresponding to the stress response information.
In some embodiments, target representation determination module 2102 may include: and a second target sensitive element determining unit.
The second target sensitive element determining unit may be configured to determine a second target sensitive element of the target object in the target audio according to the stress response information of the target object and second time information corresponding to the stress response information.
In some embodiments, the second playing module 2103 may include: a target multimedia information acquisition unit and a third playing unit.
Wherein the target multimedia information obtaining unit may be configured to obtain the target multimedia information, and the target multimedia information includes target content. The third playing unit may be configured to play the target multimedia information to the target object according to target content of the target multimedia information and multimedia receptivity information of the target object.
Since each functional module of the multimedia information playing apparatus 2100 according to the exemplary embodiment of the present disclosure corresponds to the steps of the exemplary embodiment of the multimedia information playing method described above, it is not described herein again.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution of the embodiment of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.), and includes several instructions for enabling a computing device (which may be a personal computer, a server, a mobile terminal, or a smart device, etc.) to execute the method according to the embodiment of the present disclosure, such as one or more of the steps shown in fig. 3.
Furthermore, the above-described figures are merely schematic illustrations of processes included in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the disclosure is not limited to the details of construction, the arrangements of the drawings, or the manner of implementation that have been set forth herein, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (15)

1. A method for playing multimedia information, comprising:
determining a target object;
acquiring target portrait information of the target object, wherein the target portrait information comprises multimedia acceptance degree information of the target object, and the target portrait information is determined according to historical behavior information of the target object for historical multimedia information;
acquiring first multimedia information, wherein the first multimedia information comprises target content;
and playing the first multimedia information according to the target content of the first multimedia information and the multimedia acceptance degree information of the target object.
2. The method of claim 1, wherein playing the first multimedia message according to the target content of the first multimedia message and the multimedia receptivity information of the target object comprises:
according to the target content of the first multimedia information and the multimedia acceptance degree information of the target object, determining first prompt information;
and displaying the first prompt message before playing the first multimedia message.
3. The method of claim 1, wherein the target representation information further includes a first sensitive element of the target object; wherein, playing the first multimedia information according to the target content of the first multimedia information and the multimedia acceptance degree information of the target object comprises:
and if the first multimedia information comprises a first sensitive element of the target object, carrying out desensitization processing on the first sensitive element when the first multimedia information is played.
4. The method of claim 1, wherein the target representation information includes a second sensitive element of the target object; wherein, playing the first multimedia information according to the target content of the first multimedia information and the multimedia acceptance degree information of the target object comprises:
if the first multimedia information comprises a second sensitive element of the target object, pausing the playing of the first multimedia information before playing the second sensitive element;
and displaying the second prompt message.
5. The method of claim 1, wherein the target representation information further includes a third sensitive element of the target object; wherein, playing the first multimedia information according to the target content of the first multimedia information and the multimedia acceptance degree information of the target object comprises:
if the first multimedia information comprises a third sensitive element of the target object, acquiring alternative multimedia information;
and replacing the playing of the segment where the third sensitive element is located by the replacing multimedia information.
6. The method of claim 1, further comprising:
and recommending multimedia information to the target object according to the target portrait information of the target object.
7. A method for playing multimedia information, comprising:
acquiring historical behavior information of the target object aiming at the second multimedia information;
determining a target portrait of the target object according to the historical behavior information, wherein the target portrait comprises multimedia acceptance degree information of the target object;
and playing the target multimedia information to the target object according to the multimedia acceptance degree information of the target object.
8. The method of claim 7, wherein the second multimedia information is target video information, the target representation further includes a sensitive scene of the target object, and the historical behavior information includes stress response information of the target object and first time information corresponding to the stress response information; wherein determining a target representation of the target object based on the historical behavior information comprises:
determining a target frame image in the target video according to the stress response information of the target object and first time information corresponding to the stress response information;
and determining the sensitive scene of the target object according to the target frame image.
9. The method of claim 8, wherein the target representation further comprises a first target sensitive element of the target object, and wherein the historical behavior information further comprises eye movement information of the target object; wherein determining a target representation of the target object based on the historical behavior information comprises:
and determining a first target sensitive element of the target object in the target frame image according to the eye movement information of the target object.
10. The method according to claim 9, wherein the eye movement information includes position information and movement information of an eyeball of the target object; wherein determining a first target sensitive element of the target object in the target frame image according to the eye movement information of the target object comprises:
determining a sight line area of the target object in the target frame image according to the position information of the eyeballs;
and determining a first target sensitive element of the target object according to the sight line area of the target object and the movement information of the eyeballs.
11. The method of claim 7, wherein the second multimedia information is target audio information, the target representation further includes a second target sensitive element of the target object, and the historical behavior information includes stress response information of the target object and second time information corresponding to the stress response information; wherein determining a target representation of the target object based on the historical behavior information comprises:
and determining a second target sensitive element of the target object in the target audio according to the stress response information of the target object and second time information corresponding to the stress response information.
12. A multimedia information playing apparatus, comprising:
a target object determination module configured to determine a target object;
a target representation determination module configured to obtain target representation information for the target object, the target representation information including multimedia acceptance level information for the target object, wherein the target representation information is determined based on historical behavior information of the target object with respect to historical multimedia information;
the system comprises a first multimedia information acquisition module, a second multimedia information acquisition module and a first display module, wherein the first multimedia information acquisition module is configured to acquire first multimedia information which comprises target content;
and the first playing module is configured to play the first multimedia information according to the target content of the first multimedia information and the multimedia acceptance degree information of the target object.
13. A multimedia information playing apparatus, comprising:
the historical behavior determining information module is configured to acquire historical behavior information of a target object watching second multimedia information;
a target representation determination module configured to determine a target representation of the target object based on the historical behavior information, the target representation including multimedia acceptance level information for the target object;
and the second playing module is configured to play the target multimedia information to the target object according to the multimedia acceptance degree information of the target object.
14. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-11.
15. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the method according to any one of claims 1-11.
CN202010600706.7A 2020-06-28 2020-06-28 Multimedia information playing method and device, electronic equipment and storage medium Active CN111654752B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010600706.7A CN111654752B (en) 2020-06-28 2020-06-28 Multimedia information playing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010600706.7A CN111654752B (en) 2020-06-28 2020-06-28 Multimedia information playing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111654752A true CN111654752A (en) 2020-09-11
CN111654752B CN111654752B (en) 2024-03-26

Family

ID=72348496

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010600706.7A Active CN111654752B (en) 2020-06-28 2020-06-28 Multimedia information playing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111654752B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105159990A (en) * 2015-08-31 2015-12-16 北京奇艺世纪科技有限公司 Method and device for hierarchical control of media data
CN105611412A (en) * 2015-12-22 2016-05-25 小米科技有限责任公司 Video file playing method, video clip determining method and device
CN107995523A (en) * 2017-12-21 2018-05-04 广东欧珀移动通信有限公司 Video broadcasting method, device, terminal and storage medium
CN108063979A (en) * 2017-12-26 2018-05-22 深圳Tcl新技术有限公司 Video playing control method, device and computer readable storage medium
CN108966011A (en) * 2018-07-13 2018-12-07 北京七鑫易维信息技术有限公司 A kind of control method for playing back, device, terminal device and storage medium
CN110225398A (en) * 2019-05-28 2019-09-10 腾讯科技(深圳)有限公司 Multimedia object playback method, device and equipment and computer storage medium
CN110909242A (en) * 2019-11-27 2020-03-24 北京奇艺世纪科技有限公司 Data pushing method, device, server and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115086772A (en) * 2022-06-10 2022-09-20 咪咕互动娱乐有限公司 Video desensitization method, device, equipment and storage medium
CN115086772B (en) * 2022-06-10 2023-09-05 咪咕互动娱乐有限公司 Video desensitization method, device, equipment and storage medium
CN115225967A (en) * 2022-06-24 2022-10-21 网易(杭州)网络有限公司 Video processing method, video processing device, storage medium and computer equipment

Also Published As

Publication number Publication date
CN111654752B (en) 2024-03-26

Similar Documents

Publication Publication Date Title
US20200228359A1 (en) Live streaming analytics within a shared digital environment
US20190172458A1 (en) Speech analysis for cross-language mental state identification
US20190378494A1 (en) Method and apparatus for outputting information
US10834456B2 (en) Intelligent masking of non-verbal cues during a video communication
CN116484318B (en) Lecture training feedback method, lecture training feedback device and storage medium
CN113569892A (en) Image description information generation method and device, computer equipment and storage medium
CN108900908A (en) Video broadcasting method and device
US20180109828A1 (en) Methods and systems for media experience data exchange
US11803579B2 (en) Apparatus, systems and methods for providing conversational assistance
WO2020148920A1 (en) Information processing device, information processing method, and information processing program
CN111654752B (en) Multimedia information playing method and device, electronic equipment and storage medium
CN110767005A (en) Data processing method and system based on intelligent equipment special for children
CN114090862A (en) Information processing method and device and electronic equipment
US20240087361A1 (en) Using projected spots to determine facial micromovements
CN111723758A (en) Video information processing method and device, electronic equipment and storage medium
KR102325506B1 (en) Virtual reality-based communication improvement system and method
CN108563322B (en) Control method and device of VR/AR equipment
Heck Presentation adaptation for multimodal interface systems: three essays on the effectiveness of user-centric content and modality adaptation
US20240073219A1 (en) Using pattern analysis to provide continuous authentication
US20220408153A1 (en) Information processing device, information processing method, and information processing program
US20240164677A1 (en) Attention detection
Mäkivirta On Human Perceptual Bandwidth and Slow Listening
Pandey Lip Reading as an Active Mode of Interaction with Computer Systems
WO2022200815A1 (en) Video content item selection
CN116847112A (en) Live broadcast all-in-one machine, virtual main broadcast live broadcast method and related devices

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant