CN111654752B - Multimedia information playing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111654752B
Authority
CN
China
Prior art keywords
information
target
target object
multimedia
sensitive element
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010600706.7A
Other languages
Chinese (zh)
Other versions
CN111654752A (en)
Inventor
黄杰怡
黄其亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010600706.7A priority Critical patent/CN111654752B/en
Publication of CN111654752A publication Critical patent/CN111654752A/en
Application granted granted Critical
Publication of CN111654752B publication Critical patent/CN111654752B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/462Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
    • H04N21/4627Rights management associated to the content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4667Processing of monitored end-user data, e.g. trend analysis based on the log file of viewer selections
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4668Learning process for intelligent management, e.g. learning user preferences for recommending movies for recommending content, e.g. movies

Abstract

The present disclosure provides a multimedia information playing method and apparatus, an electronic device, and a computer-readable storage medium, the method comprising: determining a target object; acquiring target portrait information of the target object, wherein the target portrait information comprises multimedia acceptance degree information of the target object, and the target portrait information is determined according to historical behavior information of the target object for historical multimedia information; acquiring first multimedia information, wherein the first multimedia information comprises target content; and playing the first multimedia information according to the target content of the first multimedia information and the multimedia acceptance degree information of the target object. According to the technical solution provided by the embodiments of the present disclosure, the target object can be given a fine-grained multimedia acceptance rating, and target multimedia information can be played to the target object according to that rating, so that the preferences of different objects are effectively accommodated.

Description

Multimedia information playing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the technical field of artificial intelligence, and in particular to a multimedia information playing method and apparatus and related devices.
Background
Viewing horror films can satisfy people's psychological need for stimulation, but overly stimulating content can cause psychological and physiological discomfort.
In the related art, target objects are generally assigned viewing levels based on their age attribute, and the viewing level of a target object then determines whether the object is suitable for viewing a given horror film.
However, rating viewers by age attribute alone is too coarse: adult objects differ greatly in their acceptance of stimulating content, so the related art cannot effectively cover the needs of objects with different stimulation preferences.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The embodiment of the disclosure provides a multimedia information playing method and device, electronic equipment and a computer readable storage medium, which can play target multimedia to a target object according to the multimedia acceptance degree of the target object, effectively meeting the needs of objects with different preferences.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
The embodiment of the disclosure provides a multimedia information playing method, which comprises the following steps: determining a target object; acquiring target portrait information of the target object, wherein the target portrait information comprises multimedia acceptance degree information of the target object, and the target portrait information is determined according to historical behavior information of the target object aiming at historical multimedia information; acquiring first multimedia information, wherein the first multimedia information comprises target content; and playing the first multimedia information according to the target content of the first multimedia information and the multimedia acceptance degree information of the target object.
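The claimed playing flow can be sketched as a simple decision rule. The patent does not specify how acceptance levels are computed or compared, so the 0-10 intensity scale, the level-derivation heuristic, and the decision names below are illustrative assumptions only:

```python
# Hypothetical sketch of the claimed flow: compare the target content's
# intensity with an acceptance level derived from historical behavior.
# All names, levels, and thresholds are illustrative assumptions.

def acceptance_level(history: list) -> int:
    """Derive a 0-10 acceptance level from historical behavior records.
    Each record notes whether the object finished a clip and its intensity."""
    finished = [h["intensity"] for h in history if h["finished"]]
    return max(finished, default=0)

def play_decision(content_intensity: int, level: int) -> str:
    """Decide how to play the first multimedia information."""
    if content_intensity <= level:
        return "play"
    if content_intensity == level + 1:
        return "prompt_then_play"   # show first prompt information, then play
    return "warn"                   # content likely exceeds acceptance

history = [
    {"intensity": 3, "finished": True},
    {"intensity": 6, "finished": True},
    {"intensity": 9, "finished": False},   # abandoned: too intense
]
level = acceptance_level(history)
print(level, play_decision(5, level), play_decision(7, level), play_decision(9, level))
```

This is only a sketch of the decision structure; the disclosure's actual grading may combine many behavioral signals rather than a single scalar.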
The embodiment of the disclosure provides a multimedia information playing method, which comprises the following steps: acquiring historical behavior information of a target object aiming at second multimedia information; determining a target portrait of the target object according to the historical behavior information, wherein the target portrait comprises multimedia acceptance degree information of the target object; and playing the target multimedia information to the target object according to the multimedia acceptance degree information of the target object.
The embodiment of the disclosure provides a multimedia information playing device, which comprises: a target object determining module, a target portrait determining module, a first multimedia information obtaining module and a first playing module.
Wherein the target object determination module may be configured to determine a target object. The target representation determination module may be configured to obtain target representation information of the target object, the target representation information comprising multimedia acceptance information of the target object, wherein the target representation information is determined based on historical behavior information of the target object for historical multimedia information. The first multimedia information acquisition module may be configured to acquire first multimedia information including target content. The first playing module may be configured to play the first multimedia information according to the target content of the first multimedia information and the multimedia acceptance degree information of the target object.
In some embodiments, the first playing module may include: a first prompt information determining unit and a first prompt information displaying unit.
The first prompt information determining unit may be configured to determine the first prompt information according to the target content of the first multimedia information and the multimedia acceptance degree information of the target object. The first cue information display unit may be configured to display the first cue information before playing the first multimedia information.
In some embodiments, the target representation information further includes a first sensitive element of the target object.
In some embodiments, the first playing module may include: a desensitizing unit.
Wherein the desensitizing unit may be configured to, if the first multimedia information includes the first sensitive element of the target object, desensitize the first sensitive element when playing the first multimedia information.
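One common desensitization technique is to mosaic (pixelate) the region of each frame where the sensitive element appears. The disclosure does not name a specific technique, so the following is a minimal sketch under that assumption, with the frame modeled as a 2D grid of gray values and the region and block size chosen for illustration:

```python
# Illustrative desensitization: pixelate the region of a frame where a
# first sensitive element appears, by replacing each block with its
# average value. Region coordinates and block size are assumptions.

def desensitize(frame, top, left, height, width, block=2):
    out = [row[:] for row in frame]
    for by in range(top, top + height, block):
        for bx in range(left, left + width, block):
            cells = [(y, x)
                     for y in range(by, min(by + block, top + height))
                     for x in range(bx, min(bx + block, left + width))]
            avg = sum(frame[y][x] for y, x in cells) // len(cells)
            for y, x in cells:
                out[y][x] = avg   # detail inside the block is lost
    return out

frame = [[y * 4 + x for x in range(4)] for y in range(4)]
blurred = desensitize(frame, 0, 0, 2, 2)
print(blurred[0][:2], blurred[1][:2])  # top-left 2x2 block averaged
```

A production implementation would instead apply a Gaussian blur or mask via an image library, but the effect, removing recognizable detail from the sensitive region while playing the rest of the frame unchanged, is the same.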
In some embodiments, the target representation information includes a second sensitive element of the target object.
In some embodiments, the first playing module may include: a pause unit and a second prompt information display unit.
Wherein the pause unit may be configured to, if the first multimedia information includes the second sensitive element of the target object, pause the playing of the first multimedia information before the second sensitive element is played. The second prompt information display unit may be configured to display the second prompt information.
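The pause behavior can be sketched as a walk over timestamped segments that stops just before a flagged one to show the second prompt information. The segment timestamps, labels, and the user-confirmation step are illustrative assumptions, not details given in the disclosure:

```python
# Sketch: pause playback before a second sensitive element and show the
# second prompt information. Timestamps and labels are hypothetical.

def play_with_pause(segments, sensitive_times):
    """segments: list of (start_seconds, label). Returns the event log."""
    log = []
    for start, label in segments:
        if start in sensitive_times:
            log.append(("pause", start))
            log.append(("show_prompt", start))  # second prompt information
            # a real player would resume only after user confirmation
        log.append(("play", label))
    return log

events = play_with_pause([(0, "intro"), (42, "jump_scare"), (60, "end")],
                         sensitive_times={42})
print(events)
```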
In some embodiments, the target representation information further includes a third sensitive element of the target object.
In some embodiments, the first playing module may include: a substitute multimedia information acquisition unit and a substitute unit.
Wherein the substitute multimedia information obtaining unit may be configured to obtain the substitute multimedia information if the first multimedia information includes a third sensitive element of the target object. The replacing unit may be configured to replace the playing of the segment where the third sensitive element is located by the replacing multimedia information.
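A minimal sketch of the replacement behavior, assuming the segment containing the third sensitive element is known as a time interval and a substitute clip is available (both assumptions, the disclosure does not say how either is obtained):

```python
# Sketch: splice substitute multimedia over the segment where a third
# sensitive element is located. Interval bounds and the substitute clip
# are illustrative assumptions.

def splice(timeline, sensitive_interval, substitute):
    """timeline: list of (second, frame_id); frames inside the closed
    interval are replaced by the substitute clip's frames, cycling."""
    lo, hi = sensitive_interval
    out, k = [], 0
    for t, frame in timeline:
        if lo <= t <= hi:
            out.append((t, substitute[k % len(substitute)]))
            k += 1
        else:
            out.append((t, frame))
    return out

timeline = [(0, "f0"), (1, "f1"), (2, "f2"), (3, "f3")]
print(splice(timeline, (1, 2), ["calm_a", "calm_b"]))
```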
In some embodiments, the multimedia information playing device may further include a recommendation module.
Wherein the recommendation module may be configured to recommend multimedia information to the target object according to target portrait information of the target object.
Since each functional module of the multimedia information playing device according to the exemplary embodiment of the present disclosure corresponds to the steps of the exemplary embodiment of the multimedia information playing method described above, a detailed description thereof will be omitted herein.
The embodiment of the disclosure provides a multimedia information playing device, which comprises: a historical behavior information acquisition module, a target portrait determination module and a second playing module.
Wherein the historical behavior information acquisition module may be configured to obtain historical behavior information of the target object for the second multimedia information. The target representation determination module may be configured to determine a target representation of the target object based on the historical behavior information, the target representation including multimedia acceptance information for the target object. The second playing module may be configured to play the target multimedia information to the target object according to the multimedia acceptance degree information of the target object.
In some embodiments, the second multimedia information is target video information, the target representation further includes a sensitive scene of the target object, and the historical behavior information includes: the stress response information of the target object and the first time information corresponding to the stress response information.
In some embodiments, the target representation determination module may include: a target frame image determining unit and a sensitive scene determining unit.
The target frame image determining unit may be configured to determine a target frame image in the target video according to stress response information of the target object and first time information corresponding to the stress response information. The sensitive scene determination unit may be configured to determine a sensitive scene of the target object from the target frame image.
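Mapping a stress response back to the frame that triggered it can be sketched as a timestamp-to-frame-index conversion, offset by a reaction latency. The latency value and frame rate below are illustrative assumptions; the disclosure does not specify how the first time information is mapped to a frame:

```python
# Sketch: locate the target frame image from the stress response's
# timestamp, allowing for an assumed human reaction delay.

REACTION_LATENCY = 0.5  # seconds; an assumed reaction delay, not from the patent

def target_frame_index(response_time: float, fps: float) -> int:
    """Frame index on screen when the stimulus (not the response) occurred."""
    stimulus_time = max(0.0, response_time - REACTION_LATENCY)
    return int(stimulus_time * fps)

# a response logged 10.5 s into a 24 fps video points at the frame at ~10.0 s
print(target_frame_index(10.5, 24))
```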
In some embodiments, the target representation further comprises a first target sensitive element of the target object, and the historical behavior information further comprises eye movement information of the target object.
In some embodiments, the target representation determination module may include a first target sensitive element determination unit.
Wherein the first target sensitive element determining unit may be configured to determine a first target sensitive element of the target object in the target frame image according to eye movement information of the target object.
In some embodiments, the eye movement information includes position information and movement information of an eyeball of the target object.
In some embodiments, the first target sensitive element determination unit may include: a line-of-sight region determination subunit and a first target sensitive element determination subunit.
Wherein the line-of-sight region determination subunit may be configured to determine the line-of-sight region of the target object in the target frame image according to the positional information of the eyeball. The first target sensitive element determination subunit may be configured to determine a first target sensitive element of the target object according to a line-of-sight area of the target object and movement information of the eyeball.
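The two subunits can be sketched as (1) mapping the eyeball position to a line-of-sight region of the frame and (2) using low eye movement (a fixation) to confirm the element being stared at. The grid granularity, fixation threshold, frame dimensions, and element annotations are all illustrative assumptions:

```python
# Sketch: determine a first target sensitive element from eye movement
# information. The eyeball position selects a line-of-sight region; a
# fixation (low movement) marks the element the object was staring at.
# Grid size, threshold, and the element map are assumptions.

def gaze_cell(gaze_x, gaze_y, frame_w, frame_h, grid=3):
    """Map a gaze point to a cell of a grid laid over the frame."""
    col = min(int(gaze_x / frame_w * grid), grid - 1)
    row = min(int(gaze_y / frame_h * grid), grid - 1)
    return row, col

def fixated_element(gaze_x, gaze_y, movement, elements, threshold=0.1):
    """Return the annotated element under a steady gaze, else None."""
    if movement > threshold:       # saccade: gaze is moving, not fixating
        return None
    return elements.get(gaze_cell(gaze_x, gaze_y, 1920, 1080))

elements = {(1, 1): "masked_figure"}   # hypothetical annotation: center cell
print(fixated_element(960, 540, 0.02, elements))
```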
In some embodiments, the second multimedia information is target audio information, the target portrait further includes a second target sensitive element of the target object, and the historical behavior information includes stress response information of the target object and second time information corresponding to the stress response information.
In some embodiments, the target representation determination module may include a second target sensitive element determining unit.
The second target sensitive element determining unit may be configured to determine a second target sensitive element of the target object in the target audio according to stress response information of the target object and second time information corresponding to the stress response information.
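For audio, the second time information selects the window played just before the stress response; some feature of that window then stands in for the sensitive element. The window length and the naive loudness-peak detector below are assumptions made purely to illustrate the time-to-element mapping:

```python
# Sketch: locate a second target sensitive element in the target audio
# from the stress response's second time information. A loudness peak
# is a crude stand-in for real audio-event detection.

def sensitive_window(samples, response_idx, window=4):
    """Return (start, end) of the window preceding the response index."""
    start = max(0, response_idx - window)
    return start, response_idx

def peak_index(samples, start, end):
    """Index of the loudest sample in the window, used only to
    illustrate mapping a response time back to an audio event."""
    return max(range(start, end), key=lambda i: abs(samples[i]))

samples = [0.1, 0.2, 0.1, 0.9, 0.3, 0.2]   # loud spike at index 3
start, end = sensitive_window(samples, response_idx=5)
print(peak_index(samples, start, end))
```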
The embodiment of the disclosure provides an electronic device, which comprises: one or more processors; and a storage device for storing one or more programs, which when executed by the one or more processors, cause the one or more processors to implement the method for playing multimedia information as described in any one of the above.
The embodiment of the disclosure proposes a computer readable storage medium, on which a computer program is stored, which when executed by a processor implements a multimedia information playing method as set forth in any one of the above.
Embodiments of the present disclosure propose a computer program product or a computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the multimedia information playing method as described in any one of the above.
According to the multimedia information playing method and device, the electronic equipment and the computer readable storage medium provided by the embodiments, the multimedia acceptance degree information of the target object is determined according to the historical behavior information of the target object for historical multimedia information, and the first multimedia information is played to the target object according to that acceptance degree information. With this technical solution, the target object's acceptance of multimedia is finely graded according to the target object's historical behavior, the needs of different target objects are covered, and user experience is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. The drawings described below are merely examples of the present disclosure and other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
Fig. 1 shows a schematic diagram of an exemplary system architecture of a multimedia information playing method or a multimedia information playing apparatus applied to an embodiment of the present disclosure.
Fig. 2 is a schematic diagram illustrating a structure of a computer system applied to a multimedia information playing device according to an exemplary embodiment.
Fig. 3 is a flowchart illustrating a multimedia information playing method according to an exemplary embodiment.
Fig. 4 is a flowchart illustrating a multimedia information playing method according to an exemplary embodiment.
Fig. 5 is a schematic diagram showing first prompt information according to an exemplary embodiment.
Fig. 6 is a flowchart illustrating a multimedia information playing method according to an exemplary embodiment.
Fig. 7 is a schematic diagram illustrating a coding scheme according to an exemplary embodiment.
Fig. 8 is a flowchart illustrating a multimedia information playing method according to an exemplary embodiment.
Fig. 9 is a schematic diagram showing the presentation of second prompt information according to an exemplary embodiment.
Fig. 10 is a flowchart illustrating a multimedia information playing method according to an exemplary embodiment.
Fig. 11 is a flowchart illustrating a multimedia information playing method according to an exemplary embodiment.
Fig. 12 is a diagram illustrating a movie recommendation interface according to an exemplary embodiment.
Fig. 13 is a flowchart illustrating a multimedia information playing method according to an exemplary embodiment.
Fig. 14 is a schematic diagram illustrating a multimedia play according to an exemplary embodiment.
Fig. 15 is a schematic diagram illustrating a multimedia play according to an exemplary embodiment.
Fig. 16 is a flowchart of step S02 of fig. 13 in an exemplary embodiment.
Fig. 17 is a flowchart of step S022 in fig. 16 in an exemplary embodiment.
Fig. 18 is a flowchart of step S03 of fig. 13 in an exemplary embodiment.
Fig. 19 is a diagram illustrating a multimedia playing method according to an exemplary embodiment.
Fig. 20 is a block diagram illustrating a multimedia information playing apparatus according to an exemplary embodiment.
Fig. 21 is a block diagram illustrating a multimedia information playing apparatus according to an exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments can be embodied in many forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted.
The described features, structures, or characteristics of the disclosure may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. However, those skilled in the art will recognize that the aspects of the present disclosure may be practiced with one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
The drawings are merely schematic illustrations of the present disclosure, in which like reference numerals denote like or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and not necessarily all of the elements or steps are included or performed in the order described. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
In the present specification, the terms "a," "an," "the," "said" and "at least one" are used to indicate the presence of one or more elements/components/etc.; the terms "comprising," "including," and "having" are intended to be inclusive and mean that there may be additional elements/components/etc., in addition to the listed elements/components/etc.; the terms "first," "second," and "third," etc. are used merely as labels, and do not limit the number of their objects.
The following describes example embodiments of the present disclosure in detail with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system architecture of a multimedia information playing method or a multimedia information playing apparatus, which may be applied to an embodiment of the present disclosure.
As shown in fig. 1, a system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive and send messages and the like. The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop computers, desktop computers, wearable devices, virtual reality devices, smart home devices, etc.
The terminal device may, for example, determine a target object; the terminal device may, for example, acquire target portrait information of the target object, the target portrait information including multimedia acceptance degree information of the target object, wherein the target portrait information is determined according to historical behavior information of the target object for historical multimedia information; the terminal device may, for example, acquire first multimedia information including target content; the terminal device may play the first multimedia information, for example, according to the target content of the first multimedia information and the multimedia acceptance information of the target object.
The server 105 may be a server providing various services, for example a background management server providing support for the terminal devices 101, 102, 103 operated by users. The background management server can analyze and process received data such as requests and feed the processing results back to the terminal devices.
The server 105 may, for example, obtain historical behavior information of the target object for the second multimedia information; the server 105 may determine a target representation of the target object, e.g. based on the historical behavior information, the target representation comprising multimedia acceptance information of the target object; the server 105 may play the target multimedia information to the target object, for example, according to the multimedia acceptance information of the target object.
It should be understood that the numbers of terminal devices, networks and servers in fig. 1 are merely illustrative; the server 105 may be a single physical server or may be composed of a plurality of servers, and any number of terminal devices, networks and servers may be provided according to actual needs.
Referring now to FIG. 2, a schematic diagram of a computer system 200 suitable for use in implementing the terminal device of an embodiment of the present application is shown. The terminal device shown in fig. 2 is only an example, and should not impose any limitation on the functions and the scope of use of the embodiments of the present application.
As shown in fig. 2, the computer system 200 includes a Central Processing Unit (CPU) 201, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 202 or a program loaded from a storage section 208 into a Random Access Memory (RAM) 203. In the RAM 203, various programs and data required for the operation of the system 200 are also stored. The CPU 201, ROM 202, and RAM 203 are connected to each other through a bus 204. An input/output (I/O) interface 205 is also connected to bus 204.
The following components are connected to the I/O interface 205: an input section 206 including a keyboard, a mouse, and the like; an output portion 207 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker, and the like; a storage section 208 including a hard disk or the like; and a communication section 209 including a network interface card such as a LAN card, a modem, and the like. The communication section 209 performs communication processing via a network such as the internet. The drive 210 is also connected to the I/O interface 205 as needed. A removable medium 211 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is installed on the drive 210 as needed, so that a computer program read therefrom is installed into the storage section 208 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 209, and/or installed from the removable medium 211. The above-described functions defined in the system of the present application are performed when the computer program is executed by a Central Processing Unit (CPU) 201.
It should be noted that the computer readable storage medium shown in the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable storage medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules and/or units and/or sub-units referred to in the embodiments of the present application may be implemented in software or in hardware. The described modules and/or units and/or sub-units may also be provided in a processor, e.g. may be described as: a processor includes a transmitting unit, an acquiring unit, a determining unit, and a first processing unit. Wherein the names of the modules and/or units and/or sub-units do not in some cases constitute a limitation of the modules and/or units and/or sub-units themselves.
As another aspect, the present application also provides a computer-readable storage medium that may be contained in the apparatus described in the above embodiments; or may be present alone without being fitted into the device. The computer-readable storage medium carries one or more programs which, when executed by a device, cause the device to perform functions including: determining a target object; acquiring target portrait information of the target object, wherein the target portrait information comprises multimedia acceptance degree information of the target object, and the target portrait information is determined according to historical behavior information of the target object aiming at historical multimedia information; acquiring first multimedia information, wherein the first multimedia information comprises target content; and playing the first multimedia information according to the target content of the first multimedia information and the multimedia acceptance degree information of the target object.
The technical solutions provided by the embodiments of the present disclosure may also involve computer vision, speech, machine learning, and other technologies within the field of artificial intelligence.
Among these, artificial intelligence (Artificial Intelligence, AI) is a theory, method, technique, and application system that uses a digital computer or a digital-computer-controlled machine to simulate, extend, and expand human intelligence, sense the environment, acquire knowledge, and use knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence is the study of the design principles and implementation methods of various intelligent machines, enabling machines to have the functions of perception, reasoning, and decision-making.
Artificial intelligence technology is a comprehensive subject covering a wide range of fields, including both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technologies mainly include directions such as computer vision, speech processing, natural language processing, and machine learning/deep learning.
Computer Vision (CV) is a science that studies how to make machines "see"; more specifically, it replaces human eyes with cameras and computers to perform machine vision tasks such as recognition, tracking, and measurement of a target, and further performs graphic processing so that the computer produces an image more suitable for human observation or for transmission to an instrument for detection. As a scientific discipline, computer vision studies related theories and technologies in an attempt to build artificial intelligence systems that can acquire information from images or multidimensional data. Computer vision techniques typically include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D techniques, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric recognition techniques such as face recognition and fingerprint recognition.
The key technologies of speech technology (Speech Technology) are automatic speech recognition (Automatic Speech Recognition, ASR), speech synthesis (Text-To-Speech, TTS), and voiceprint recognition. Enabling computers to listen, see, speak, and feel is the development direction of future human-computer interaction, and speech is expected to become one of the best modes of human-computer interaction in the future.
Machine Learning (ML) is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. It specializes in studying how computers can simulate or implement human learning behavior in order to acquire new knowledge or skills, and reorganize existing knowledge structures to continuously improve their own performance. Machine learning is the core of artificial intelligence and the fundamental way to endow computers with intelligence; it is applied throughout the various fields of artificial intelligence. Machine learning and deep learning typically include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from demonstrations.
Fig. 3 is a flowchart illustrating a multimedia information playing method according to an exemplary embodiment. The method provided in the embodiments of the present disclosure may be performed by any electronic device having computing and processing capability, for example, the terminal devices 102 and 103 in the embodiment of fig. 1. In the following embodiments, the terminal devices 102 and 103 are taken as the execution subject for illustration, but the present disclosure is not limited thereto.
Referring to fig. 3, the multimedia information playing method provided by the embodiment of the present disclosure may include the following steps.
In some embodiments, the multimedia information may generally include a variety of media forms such as text information, sound information, image information, or video information, which is not limited by the present disclosure.
In step S1, a target object is determined.
In some embodiments, the target object may refer to a user who needs to watch a target video, a target text, or a target picture, or may refer to a user who needs to listen to a target sound, which is not limited by the present disclosure.
In some embodiments, the target object may be determined by collecting information such as an image, sound, authentication, etc. of the target object.
In some embodiments, the collected image may be processed by an image detection technique to determine the target object; the acquired image information may be input into a trained neural network model to determine the target object; or the collected sound may be processed by an automatic speech recognition technique to determine the target object.
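As an illustrative sketch only (the embedding vectors, the enrolled-profile dictionary, and the 0.8 threshold are assumptions introduced here, not part of the disclosure), the image-based identification above could reduce to a nearest-match lookup of a face embedding against pre-enrolled objects:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two non-zero vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify_target_object(embedding, enrolled, threshold=0.8):
    """Return the id of the enrolled object whose stored embedding best
    matches, or None if no match clears the threshold (unknown viewer)."""
    best_id, best_score = None, threshold
    for object_id, stored in enrolled.items():
        score = cosine_similarity(embedding, stored)
        if score >= best_score:
            best_id, best_score = object_id, score
    return best_id
```

An unknown viewer (no match above the threshold) would fall back to a default, non-personalized playing policy.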
It will be appreciated that the present disclosure is not limited to the above-described method of determining a target object.
In step S2, target portrait information of the target object is acquired, where the target portrait information includes multimedia acceptance degree information of the target object, and the target portrait information is determined according to historical behavior information of the target object for historical multimedia information.
In some embodiments, the historical multimedia information may refer to related multimedia information (e.g., video, audio, or pictures, etc.) that the target object watched or listened to over a certain period of time in the past, or may refer to test multimedia information that was played before the first multimedia information was played, which is not limited by the present disclosure.
In some embodiments, the historical behavior information of the target object for the historical multimedia information may refer to certain stress responses of the target object when viewing or listening to the historical multimedia information, such as crying out (e.g., gasping or exclaiming "ah", "that scared me", etc.), covering the mouth, abruptly shifting the line of sight, abrupt pupil dilation, and the like, which is not limited by the present disclosure.
For example, the stress responses of the target object when watching a target horror film within the past month may be collected to generate portrait information of the target object for horror films; the stress responses of the target object when listening to "ghost stories" in the past three months may be collected to generate portrait information of the target object for "ghost stories"; and the stress responses of the target object when watching a test video/audio may be collected to generate portrait information of the target object for the target video/target audio. The present disclosure does not limit the manner of collecting the historical behavior information.
In some embodiments, the historical behavior information of the target object for the historical multimedia information may refer to the stress responses of the target object when viewing or listening to the related multimedia information together with the time information of those stress responses, for example, the mouth-covering behavior of the target object when watching a horror film and the corresponding time information; the ear-covering behavior of the target object when listening to a ghost story and the corresponding time information; or the screaming behavior of the target object and the corresponding time information.
In some embodiments, the portrait information of each object may be determined in advance from the historical behavior information of the different objects for the historical multimedia information, and the portrait information of each object may then be stored in advance. When the target object is determined, the target portrait information of the target object may be retrieved from the portrait information stored in advance.
In some embodiments, the multimedia acceptance degree information in the portrait information of the target object may be hierarchical information; for example, the multimedia acceptance degree of the target object may be classified into categories such as heavy preference for stimulation, light preference for stimulation, and stimulation avoidance, which is not limited in this disclosure.
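One hedged way to realize such hierarchical acceptance degree information — the thresholds and category labels below are illustrative assumptions, not values given by the disclosure — is to grade each object by its average rate of recorded stress responses per viewing session:

```python
def grade_acceptance(stress_event_count, sessions):
    """Map the average number of stress responses per viewing session to one
    of three hypothetical acceptance categories (thresholds are illustrative)."""
    if sessions == 0:
        return "light_preference"  # no history yet: assume a cautious default
    rate = stress_event_count / sessions
    if rate < 1:
        return "heavy_preference"  # rarely startled: tolerates strong stimuli
    if rate < 5:
        return "light_preference"
    return "stimulus_avoidance"
```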
In step S3, first multimedia information is acquired, the first multimedia information including target content.
In some embodiments, the first multimedia information may refer to multimedia information to be played, such as video, audio, or pictures to be played, which the present disclosure is not limited to.
It is understood that the target content of the first multimedia information and the stimulation level of the target content may be known in advance before the first multimedia information is played, so that the first multimedia information can be played to the target object according to the target content and the multimedia acceptance degree information of the target object.
In some embodiments, the target content of the first multimedia information may be graded in advance; for example, it may be classified into: extremely stimulating, generally stimulating, non-stimulating, etc., which is not limited by the present disclosure.
In step S4, the first multimedia information is played according to the target content of the first multimedia information and the multimedia acceptance degree information of the target object.
In some embodiments, it may be determined how to play the first multimedia information to the target object based on the stimulation level of the first multimedia information and the multimedia acceptance level information of the target object.
In some embodiments, before the first multimedia information is played, prompt information may be presented to the target object according to the stimulation level of the first multimedia information and the multimedia acceptance degree information of the target object.
For example, assuming that the stimulation level of the first multimedia information is extremely stimulating and the multimedia acceptance degree of the target object is only a light preference for stimulation, the target object may be prompted when playing the first multimedia information: "This XX content is extremely stimulating and may not suit you; please consider whether to continue", or part of the content of the first multimedia information may be played after the prompt so that the target object can decide whether to continue. For another example, if the stimulation level of the first multimedia information is mildly stimulating and the multimedia acceptance degree of the target object is a heavy preference for stimulation, the target object may be prompted: "This XX content is not stimulating enough and may not suit you; please consider whether to select another XX". For another example, if the stimulation level of the first multimedia information is heavily stimulating and the multimedia acceptance degree of the target object is a heavy preference for stimulation, the target object may be prompted when the first multimedia information is played with a message such as: "This XX content is extremely stimulating and meets your requirements; let us feel the heartbeat together".
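The prompt selection described in the examples above can be sketched as a lookup table keyed by (content stimulation level, object acceptance degree); the level names and prompt wording below are assumptions for illustration, not fixed by the disclosure:

```python
# Hypothetical prompt table; (level, acceptance) pairs absent from the
# table call for playing the content without any prompt.
PROMPTS = {
    ("extreme", "light_preference"):
        "This XX content is extremely stimulating and may not suit you; continue?",
    ("mild", "heavy_preference"):
        "This XX content is not stimulating enough; consider selecting another XX?",
    ("extreme", "heavy_preference"):
        "This XX content is extremely stimulating and meets your requirements!",
}

def select_prompt(stimulus_level, acceptance_level):
    # Returns the prompt text, or None when no prompt is needed.
    return PROMPTS.get((stimulus_level, acceptance_level))
```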
According to the technical solution provided by this embodiment, the multimedia acceptance degree information of the target object is determined according to the historical behavior information of the target object for the historical multimedia information, and the first multimedia information is played to the target object according to the multimedia acceptance degree information of the target object. With this technical solution, the multimedia acceptance degrees of different objects can be finely graded according to the historical behaviors of those objects, and the first multimedia information can be played to the target object according to the graded multimedia acceptance degree so as to cover the different requirements of different target objects, thereby improving the user experience.
Fig. 4 is a flowchart illustrating a multimedia information playing method according to an exemplary embodiment. Referring to fig. 4, the above-described multimedia information playing method may include the following steps.
In step S1, a target object is determined.
In step S2, target portrait information of the target object is acquired, where the target portrait information includes multimedia acceptance degree information of the target object, and the target portrait information is determined according to historical behavior information of the target object for historical multimedia information.
In step S3, first multimedia information is acquired, the first multimedia information including target content.
In step S411, first prompt information is determined according to the target content of the first multimedia information and the multimedia acceptance degree information of the target object.
In step S412, the first prompt information is displayed before the first multimedia information is played.
In some embodiments, before the first multimedia information is played, prompt information may be presented to the target object according to the stimulation level of the first multimedia information and the multimedia acceptance degree information of the target object.
For example, assuming that the stimulation level of the first multimedia information is extremely stimulating and the multimedia acceptance degree of the target object is only a light preference for stimulation, the target object may be prompted when playing the first multimedia information: "This XX content is extremely stimulating and may not suit you; please consider whether to continue", or part of the content of the first multimedia information may be played after the prompt so that the target object can decide whether to continue. For another example, if the stimulation level of the first multimedia information is mildly stimulating and the multimedia acceptance degree of the target object is a heavy preference for stimulation, the target object may be prompted: "This XX content is not stimulating enough and may not suit you; please consider whether to select another XX". For another example, if the stimulation level of the first multimedia information is heavily stimulating and the multimedia acceptance degree of the target object is a heavy preference for stimulation, the target object may be prompted when the first multimedia information is played with a message such as: "This XX content is extremely stimulating and meets your requirements; let us feel the heartbeat together".
Fig. 5 is a schematic diagram showing first prompt information according to an exemplary embodiment.
In some embodiments, the first multimedia information may be a target video or a target picture. Assuming that the first multimedia information is a target video, if the multimedia acceptance degree information of the target object does not match the stimulation level of the first multimedia information, first prompt information such as "Your stimulation preference level is C (stimulation avoider)! More than 66% of users at this level felt uncomfortable with the current movie" may be displayed. After seeing the first prompt information, the user may click the "continue watching" button to continue watching the video, or click the "replace" button to switch to another video. It is understood that the disclosure does not limit the content of the first prompt information.
According to the technical solution provided by this embodiment, whether the first multimedia information is suitable for the target object is determined according to the target content of the first multimedia information and the multimedia acceptance degree information of the target object, and the first prompt information is presented to the target object so that the target object can decide whether to continue playing according to the first prompt information. This technical solution accurately and effectively determines the acceptance degree information of the target object for similar multimedia information according to the historical behavior information of the target object for the historical multimedia information, and presents the first prompt information to the target object according to the multimedia acceptance degree information of the target object. The technical solution provided by this embodiment fully considers the different requirements of different objects and improves the user experience.
Fig. 6 is a flowchart illustrating a multimedia information playing method according to an exemplary embodiment. In some embodiments, the target portrait information of the target object may further include a first sensitive element of the target object.
In some embodiments, the first sensitive element of the target object may refer to an element to which the target object is poorly adapted, such as a ghost element, a zombie element, or a bloody element in video, or a horrifying creaking sound, a wolf howl, or the like in audio, to which the present disclosure is not limited.
Referring to fig. 6, the above-described multimedia information playing method may include the following steps.
In step S1, a target object is determined.
In step S2, target portrait information of the target object is acquired, where the target portrait information includes multimedia acceptance degree information of the target object, and the target portrait information is determined according to historical behavior information of the target object for historical multimedia information.
In step S3, first multimedia information is acquired, the first multimedia information including target content.
In step S421, if the first multimedia information includes a first sensitive element of the target object, the first sensitive element is desensitized when the first multimedia information is played.
In some embodiments, the first multimedia information may refer to video information, audio information, or picture information, etc., which the present disclosure does not limit.
In some embodiments, if the first multimedia information is video information and a target frame image in the video includes a first sensitive element of the target object, the sensitive element in the target frame image is desensitized (for example, by pixelation, picture overlay, or matting out); if the first multimedia information is audio information and a certain segment of the audio includes a first sensitive element of the target object, the audio segment in which the first sensitive element is located is desensitized (for example, by muting or audio overlay). It will be appreciated that the present disclosure is not limited to the desensitization processing in the above embodiments, which may be chosen according to actual requirements.
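A minimal sketch of frame-level desensitization by pixelation, assuming a grayscale frame represented as a list of rows and a rectangular sensitive region (a production system would operate on decoded video frames, e.g. via an image library; the region format and block size here are illustrative):

```python
def pixelate(image, region, block=2):
    """Mosaic a rectangular region (x0, y0, x1, y1) of a grayscale image
    (a list of rows of pixel values) by averaging block x block tiles."""
    x0, y0, x1, y1 = region
    for by in range(y0, y1, block):
        for bx in range(x0, x1, block):
            # Collect the tile's pixels, clipped to the region boundary.
            tile = [image[y][x]
                    for y in range(by, min(by + block, y1))
                    for x in range(bx, min(bx + block, x1))]
            avg = sum(tile) // len(tile)
            # Overwrite every pixel in the tile with the tile average.
            for y in range(by, min(by + block, y1)):
                for x in range(bx, min(bx + block, x1)):
                    image[y][x] = avg
    return image
```

Pixels outside the given region are left untouched, so only the sensitive element is obscured.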
In some embodiments, the target portrait information of the target object may further include sensitive scene information of the target object, where a sensitive scene may refer to a "murder scene", a "fighting scene", a "cemetery scene", or the like, which is not limited in this disclosure.
In some embodiments, the first multimedia information may be played to the target object according to a sensitive scene of the target object. For example, if the first multimedia information to be played includes a sensitive scene of the target object, the target object may be reminded before the sensitive scene is played; relatively comfortable music can be played at the same time of playing the sensitive scene; and the frame image related to the sensitive scene can be desensitized when the sensitive scene is played. The method for playing the sensitive scene in the first multimedia information is not limited, and any playing mode capable of relieving the emotion of the target object is within the protection scope of the present disclosure.
Fig. 7 is a schematic diagram illustrating a pixelation scheme according to an exemplary embodiment. As shown in fig. 7, if the target frame image in the first multimedia information includes a first sensitive element to which the target object is sensitive, the first sensitive element may be pixelated in the target frame image. Reference numeral 701 in fig. 7 shows the effect of pixelating the first sensitive element.
Fig. 8 is a flowchart illustrating a multimedia information playing method according to an exemplary embodiment.
In some embodiments, the target portrait information of the target object may include a second sensitive element of the target object, where the second sensitive element may refer to an element to which the target object is extremely sensitive. For example, assuming that the target object is extremely afraid of ghost elements, zombie elements, bloody elements, and the like, those elements may serve as the second sensitive elements of the target object.
It will be appreciated that "extremely sensitive" is a relative description indicating that the target object responds more strongly to the second sensitive element than to other sensitive elements.
Referring to fig. 8, the above-described multimedia information playing method may include the following steps.
In step S1, a target object is determined.
In step S2, target portrait information of the target object is acquired, where the target portrait information includes multimedia acceptance degree information of the target object, and the target portrait information is determined according to historical behavior information of the target object for historical multimedia information.
In step S3, first multimedia information is acquired, the first multimedia information including target content.
In step S431, if the first multimedia information includes the second sensitive element of the target object, playing of the first multimedia information is paused before playing the second sensitive element.
In general, when there is a second sensitive element in the first multimedia information to which the target object is extremely sensitive, the target object may be unwilling to see or hear the second sensitive element.
In some embodiments, if the first multimedia information includes the second sensitive element that is extremely sensitive to the target object, the playing may be automatically paused before the multimedia segment in which the second sensitive element is located is played.
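The auto-pause behavior can be sketched as computing the next pause timestamp from the current playback position and the known sensitive segments; the two-second lead time and the (start, end) segment format are illustrative assumptions:

```python
def next_pause_point(position, sensitive_segments, lead_time=2.0):
    """Given the current playback position (seconds) and a list of
    (start, end) segments containing an extremely sensitive element,
    return the timestamp at which playback should auto-pause, or None.
    Pause points that have already been passed are skipped."""
    upcoming = [start for start, _ in sensitive_segments
                if start - lead_time >= position]
    return min(upcoming) - lead_time if upcoming else None
```

The player would schedule a pause (and show the second prompt information) at the returned timestamp, then resume or skip depending on the target object's choice.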
In step S432, the second prompt information is acquired.
In some embodiments, the second prompt information is, for example, prompt information that notifies the target object that extremely unsuitable content may appear next and asks whether the target object wishes to skip it, but the disclosure is not limited in this regard.
In step S433, the second prompt message is displayed.
In some embodiments, the first multimedia information may refer to a target video. Fig. 9 illustrates a second prompt information presentation schematic according to an exemplary embodiment. As shown in fig. 9, when the second sensitive element of the target object is about to appear during the playing of the target video, the playing of the target video may be paused and the second prompt information shown in fig. 9, for example asking whether to continue playing the video, may be displayed; the target object may then decide whether to continue watching the video through the "cancel" and "continue" buttons.
It will be appreciated that if the first multimedia information is audio information, the second prompt information may be presented to the target object via voice broadcast, and the target object may likewise decide via voice commands whether to continue listening to the audio information.
According to the technical solution provided by the embodiment of the disclosure, the second sensitive element to which the target object is extremely sensitive is automatically determined from the historical behavior information of the target object, the playing of the first multimedia information is paused before the second sensitive element is played, and the target object is prompted so that the target object can decide whether to continue playing. In this way, the target object can be psychologically prepared before the second sensitive element is played, avoiding the target object being frightened by the sudden appearance of the second sensitive element.
Fig. 10 is a flowchart illustrating a multimedia information playing method according to an exemplary embodiment. In some embodiments, the target portrait information of the target object may further include a third sensitive element of the target object.
In some embodiments, the third sensitive element of the target object may refer to an element to which the target object is poorly adapted, such as a ghost element, a zombie element, or a bloody element in video, or a horrifying creaking sound, a wolf howl, or the like in audio, to which the present disclosure is not limited.
In step S1, a target object is determined.
In step S2, target portrait information of the target object is acquired, where the target portrait information includes multimedia acceptance degree information of the target object, and the target portrait information is determined according to historical behavior information of the target object for historical multimedia information.
In step S3, first multimedia information is acquired, the first multimedia information including target content.
In step S441, if the first multimedia information includes a third sensitive element of the target object, replacement multimedia information is acquired.
In step S442, the playing of the segment in which the third sensitive element is located is replaced with the replacement multimedia information.
In some embodiments, if the first multimedia information includes a third sensitive element of the target object, the multimedia segment in which the third sensitive element is located may be replaced with the replacement multimedia information.
In some embodiments, if the first multimedia information is a target video and the third sensitive element is present in a target frame image, the target frame image may be replaced by a replacement frame image such that the replacement frame image is played when the target frame image would otherwise be played.
In some embodiments, since the multimedia information may be formed by fusing multiple types of media information (for example, video multimedia may be formed by fusing pictures and music), it is further possible to determine in which medium of the target multimedia information the third sensitive element exists and to replace the media segment in which the third sensitive element exists with the corresponding replacement multimedia information.
For example, assuming that the third sensitive element exists in the music information of the target video, the audio piece where the third sensitive element exists may be replaced by a music piece that can relax emotion.
In some embodiments, if the first multimedia information is the target audio information and the third sensitive element is present in a certain audio segment, that audio segment may be replaced by a replacement audio segment.
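The segment-replacement step can be sketched as building an edit list that splices a replacement clip over each segment containing the third sensitive element; the (source, start, end) tuple format and the clip naming are assumptions for illustration:

```python
def build_play_list(duration, sensitive_segments, replacement_clip):
    """Return an edit list of (source, start, end) entries in which each
    sensitive segment of the original media is swapped for a replacement
    clip of the same length. Segments are given as (start, end) seconds."""
    playlist, cursor = [], 0.0
    for start, end in sorted(sensitive_segments):
        if cursor < start:
            # Keep the original media up to the sensitive segment.
            playlist.append(("original", cursor, start))
        # Substitute the replacement clip for the sensitive segment.
        playlist.append((replacement_clip, 0.0, end - start))
        cursor = end
    if cursor < duration:
        playlist.append(("original", cursor, duration))
    return playlist
```

A player would then render the edit list in order, so the target object never sees or hears the replaced segment.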
According to the technical solution provided by this embodiment, the multimedia segment in which the third sensitive element is located is replaced with the replacement multimedia information, thereby preventing the target object from being frightened and relieving the emotion of the target object.
Fig. 11 is a flowchart illustrating a multimedia information playing method according to an exemplary embodiment. Referring to fig. 11, the above-described multimedia information playing method may include the following steps.
In step S1, a target object is determined.
In step S2, target portrait information of the target object is acquired, where the target portrait information includes multimedia acceptance degree information of the target object, and the target portrait information is determined according to historical behavior information of the target object for historical multimedia information.
In step S3, first multimedia information is acquired, the first multimedia information including target content.
In step S4, the first multimedia information is played according to the target content of the first multimedia information and the multimedia acceptance degree information of the target object.
In step S5, multimedia information is recommended to the target object according to the target portrait information of the target object.
In some embodiments, corresponding multimedia information may be recommended to the target object based on the multimedia acceptance degree information in the target portrait of the target object and the first sensitive element information. For example, assuming that the multimedia acceptance degree of the target object is a mild-stimulus preference, multimedia information that is mildly stimulating and does not include the first sensitive element to which the target object is sensitive may be recommended to the target object; the embodiment of the present disclosure is not limited to a specific recommendation method.
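One way to realize such a recommendation filter is sketched below. The integer stimulus levels, the field names, and the catalog are illustrative assumptions; the disclosure does not prescribe a concrete recommendation algorithm.

```python
# Assumed integer encoding of stimulus/acceptance levels.
MILD, GENERAL, HEAVY = 1, 2, 3

def recommend(candidates, acceptance_level, sensitive_elements):
    """Keep items whose stimulus level does not exceed the object's
    acceptance level and that contain none of its sensitive elements."""
    return [c for c in candidates
            if c["stimulus_level"] <= acceptance_level
            and not (set(c["elements"]) & set(sensitive_elements))]

catalog = [
    {"title": "calm_doc",  "stimulus_level": MILD,  "elements": []},
    {"title": "ghost_pic", "stimulus_level": MILD,  "elements": ["ghost"]},
    {"title": "slasher",   "stimulus_level": HEAVY, "elements": ["blood"]},
]
# A mild-preference viewer sensitive to ghost elements gets only the calm item.
picks = recommend(catalog, MILD, ["ghost"])
```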
FIG. 12 is a diagram illustrating a movie recommendation interface, according to an example embodiment. As shown in fig. 12, after the target object finishes watching the current movie, other videos may be recommended to the target object according to the multimedia acceptance degree information of the target object. The target object may select a favorite video to play from the recommendations, or may click the "replay" button to review the current movie.
According to the technical scheme provided by this embodiment, multimedia information can be recommended according to the portrait information of different target objects, which can increase the click rate of the multimedia information and meet the requirements of the target objects.
Fig. 13 is a flowchart illustrating a multimedia information playing method according to an exemplary embodiment. Referring to fig. 13, the above-described multimedia information playing method may include the following steps.
In step S01, historical behavior information of the target object for the second multimedia information is acquired.
In some embodiments, the target object may reveal its stress responses through its behavior while watching the target video or listening to the target audio. For example, if the target object suddenly shifts its line of sight when seeing a certain video interface, accompanied by a scream or the like, the target object may be considered sensitive to, and unable to adapt to, certain content of that interface; for another example, if the target object suddenly covers its ears after hearing a certain sound, the target object may be considered sensitive to some content in the sound.
In some embodiments, the historical behavior information of the target object for the second multimedia information may be determined from a reaction of the target object to viewing or listening to the multimedia information over a period of time.
In some embodiments, if the second multimedia information is a target video, the target object may be presented with a prompt as shown in fig. 14 before viewing the target video: "The following video may cause discomfort; it is suggested that you complete the stimulus grading test first." If the target object clicks the start test button, a test video may be played to the target object to collect historical behavior information of the target object. While viewing the test video, the target object may click the "exit" button on the interface shown in FIG. 15 to stop watching the test video.
In step S02, a target portrait of the target object is determined according to the historical behavior information, where the target portrait includes multimedia acceptance degree information of the target object.
In some embodiments, the representation of the target object may be determined from the stress response of the target object.
In some embodiments, the number of stress reactions of the target object, the stress behaviors (such as screaming), and the stimulus level of the second multimedia information may be counted to determine the multimedia acceptance degree information of the target object.
For example, assuming that the stimulus level of the second multimedia information is mild stimulus, and the target object gives stress responses multiple times while viewing or listening to the second multimedia information, each of which is extremely intense, the multimedia acceptance degree of the target object may be determined to be stimulus avoidance.
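The grading logic just described might be sketched as follows. The thresholds (three or more reactions at maximum intensity) and the acceptance-level labels are invented for illustration and are not part of the disclosure.

```python
def grade_acceptance(stimulus_level, reaction_count, max_intensity):
    """stimulus_level: 1=mild, 2=general, 3=heavy stimulus content;
    max_intensity: strongest observed stress reaction, 0 (none) to 3 (extreme).
    Returns an assumed acceptance-level label."""
    if stimulus_level == 1 and reaction_count >= 3 and max_intensity >= 3:
        # repeated, extremely intense reactions even to mild content
        return "stimulus_avoidance"
    if reaction_count == 0:
        # no reaction at all: tolerance at least matches the content level
        return "heavy_preference" if stimulus_level == 3 else "general_preference"
    return "mild_preference"
```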
In step S03, target multimedia information is played to the target object according to the multimedia acceptance degree information of the target object.
In some embodiments, it may be determined how to play the target multimedia information to the target object based on the stimulus level of the target multimedia information and the multimedia acceptance degree information of the target object.
In some embodiments, before the target multimedia information is played, a prompt may be played to the target object according to the stimulus level of the target multimedia information and the multimedia acceptance degree information of the target object.
For example, assuming that the stimulus level of the target multimedia information is extreme stimulus and the multimedia acceptance degree of the target object is only a mild-stimulus preference, the target object may be prompted before playing: "The XX content is extremely stimulating and may not suit you; please consider whether to continue", or part of the content may be played after the prompt so that the target object can decide whether to continue. For another example, if the stimulus level is mild stimulus and the multimedia acceptance degree of the target object is a heavy-stimulus preference, the target object may be prompted: "The XX content is not stimulating enough and may not suit you; please consider choosing other XX". For yet another example, if the stimulus level is heavy stimulus and the multimedia acceptance degree is a heavy-stimulus preference, the target object may be prompted: "The XX content is extremely stimulating and should meet your needs; let us feel the heartbeat together".
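A minimal sketch of the prompt selection just illustrated: the content's stimulus level is compared with the object's acceptance level. The integer encoding of levels and the prompt wording are assumptions for the example.

```python
MILD, GENERAL, HEAVY = 1, 2, 3  # assumed integer encoding of levels

def choose_prompt(stimulus_level, acceptance_level):
    """Pick a prompt by comparing the content's stimulus level with the
    target object's acceptance level; the wording is illustrative only."""
    if stimulus_level > acceptance_level:
        return "This content may be too stimulating for you; consider whether to continue."
    if stimulus_level < acceptance_level:
        return "This content may not be stimulating enough; consider choosing other content."
    return "This content should match your preference; enjoy."
```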
According to the technical scheme provided by this embodiment, the acceptance degree information of the target object for multimedia information is accurately determined by analyzing the historical behavior information of the target object, and the acceptance degree of the target object is fully considered when multimedia information is subsequently played to the target object according to its multimedia acceptance degree information. The technical scheme provided by this embodiment can therefore cover the different requirements of different objects.
Fig. 16 is a flowchart of step S02 of fig. 13 in an exemplary embodiment. Referring to fig. 16, the above step S02 may include the following steps.
In some embodiments, the second multimedia information may be target video information, the target portrait of the target object may include a sensitive scene of the target object and a first target sensitive element, and the historical behavior information of the target object for the second multimedia information may include stress response information of the target object, first time information corresponding to the stress response information, and eye movement information of the target object.
The sensitive scene may refer to, for example, a "killing scene", a "fighting scene", or a "tomb scene", and the first target sensitive element may refer to, for example, a ghost element, a zombie element, or a bloody element, which is not limited in this disclosure.
In step S021, a target frame image is determined in the target video according to the stress response information of the target object and the first time information corresponding to the stress response information.
In some embodiments, the target frame image in the target video may be determined from a first time at which the target object is stressed.
In step S022, a sensitive scene of the target object is determined from the target frame image.
In some embodiments, the target frame image may be processed by a neural network model trained in advance to determine the scene in the target frame image to which the target object is sensitive; the target frame image may also be processed by image processing techniques to determine that scene, as this disclosure is not limited in this regard.
The target neural network may be trained in advance on training images with known scenes, so that it can identify the scene in an image. The target neural network may be a convolutional neural network or a recurrent neural network, which is not limited in this disclosure.
According to the embodiment of the disclosure, the target frame image can be determined in the target video through the stress response information of the target object and its corresponding first time information, and the scene in the target frame image can be determined through an image processing technique (or a neural network technique), so that the sensitive scene of the target object can be learned.
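The two steps above — mapping the stress-reaction time to a target frame, then classifying its scene — can be sketched as follows. The frame-rate mapping and the lookup-table stand-in for the trained neural network are assumptions made for illustration.

```python
def frame_index_at(stress_time_s, fps):
    """Map the first time at which the stress reaction occurred (seconds)
    to a frame index in the target video."""
    return int(stress_time_s * fps)

def classify_scene(frame_index, labelled_scenes):
    """Stand-in for the pre-trained scene model: look the frame up in a
    table of (start_frame, end_frame, scene_label) annotations."""
    for start, end, scene in labelled_scenes:
        if start <= frame_index < end:
            return scene
    return "unknown"

scenes = [(0, 200, "dialogue"), (200, 400, "fighting scene")]
target_frame = frame_index_at(12.5, fps=24)   # stress reaction observed at 12.5 s
sensitive_scene = classify_scene(target_frame, scenes)
```

In a real system the table lookup would be replaced by inference with the trained network.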
In some embodiments, the first target sensitive element of the target object may also be determined in the target frame image based on eye movement information of the target object.
In some embodiments, the eye movement information of the target object around the first time can be tracked through an eye tracking technique to determine the area in which the eyeball position of the target object changed before and after the stress reaction occurred. For example, if before the stress reaction occurs the gaze of the target object stays in tracking area A, and after the stress reaction occurs the tracking area suddenly and rapidly changes from area A to other areas, then the element in area A corresponding to the first time can be considered the first target sensitive element.
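The region-change heuristic can be sketched in Python. The 3x3 grid over a 1920x1080 frame and the dwell-then-jump criterion are illustrative assumptions, not the disclosed eye-tracking algorithm.

```python
def gaze_region(x, y, cols=3, rows=3, width=1920, height=1080):
    """Quantize a gaze coordinate into a coarse grid cell (assumed 3x3 grid)."""
    return (min(int(x * cols / width), cols - 1),
            min(int(y * rows / height), rows - 1))

def sensitive_region(gaze_before, gaze_after):
    """If the gaze dwelt in a single region before the stress reaction and
    jumped entirely away afterwards, treat the dwelt-in region as holding
    the first target sensitive element; otherwise report nothing."""
    regions_before = {gaze_region(x, y) for x, y in gaze_before}
    regions_after = {gaze_region(x, y) for x, y in gaze_after}
    if len(regions_before) == 1 and not (regions_before & regions_after):
        return next(iter(regions_before))
    return None
```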
Eye tracking is a technique for measuring eye movement. The event of most interest in eye tracking studies is determining where a human or animal is looking (the "point of gaze" or "fixation point"). More precisely, instruments locate the pupil position through image processing, obtain its coordinates, and calculate the gaze point through a certain algorithm, so that a computer knows where, and when, the user is looking.
In some embodiments, since the target video is multimedia information composed of images and sound, the first target sensitive element of the target object may also exist in the audio. Therefore, when the eye tracking technique does not locate the first target sensitive element, the audio segment corresponding to the first time information of the stress response of the target object can be used as the first target sensitive element.
According to the embodiment of the disclosure, the target frame image can be determined in the target video through the stress response information of the target object and the corresponding first time information thereof, then the first target sensitive element of the target object can be accurately determined in the target frame image through the eye tracking technology, and the first target sensitive element can be determined in the audio information of the target video through the stress response information and the corresponding first time information thereof.
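The visual-first, audio-fallback rule summarized above might look like this; the two-second audio window around the stress timestamp is an assumed parameter, not part of the disclosure.

```python
def first_sensitive_element(visual_hit, stress_time_s, segment_len_s=2.0):
    """Prefer the visual element located by eye tracking; otherwise fall back
    to the audio segment around the stress-reaction timestamp (the 2-second
    window is an assumed parameter)."""
    if visual_hit is not None:
        return {"type": "visual", "region": visual_hit}
    start = max(0.0, stress_time_s - segment_len_s / 2)
    return {"type": "audio", "span": (start, start + segment_len_s)}
```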
Fig. 17 is a flowchart of step S022 in fig. 16 in an exemplary embodiment.
In some embodiments, the eye movement information of the target object may include position information and movement information of an eyeball of the target object.
Referring to fig. 17, the above step S022 may include the following steps.
In step S0221, a line-of-sight region of the target object is determined in the target frame image based on the positional information of the eyeball.
In step S0222, a first target sensitive element of the target object is determined from a line-of-sight area of the target object and movement information of the eyeball.
According to the embodiment of the disclosure, the line-of-sight area of the target object can first be determined in the target frame image according to the position information of the eyeball, and the first target sensitive element of the target object can then be accurately determined according to the line-of-sight area and the movement information of the eyeball.
In some embodiments, the second multimedia information may be target audio information, the target portrait of the target object may further include a second target sensitive element of the target object, and the historical behavior information of the target object for the historical multimedia information may include stress response information of the target object and second time information corresponding to the stress response information.
In some embodiments, the second target sensitive element of the target object may be determined in the target audio according to the stress response information of the target object and the second time information corresponding to the stress response information.
Fig. 18 is a flowchart of step S03 of fig. 13 in an exemplary embodiment. Referring to fig. 18, the above step S03 may include the following steps.
In step S031, the target multimedia information is acquired, and the target multimedia information includes target content.
In step S032, the target multimedia information is played to the target object according to the target content of the target multimedia information and the multimedia acceptance degree information of the target object.
In some embodiments, how to play the target multimedia information to the target object may be determined based on the stimulation level of the target multimedia information and the multimedia acceptance level information of the target object.
In some embodiments, before playing the target multimedia information, a prompt may be played to the target object according to the stimulus level of the target multimedia information and the multimedia acceptance degree information of the target object.
For example, assuming that the stimulus level of the target multimedia information is extreme stimulus and the multimedia acceptance degree of the target object is only a mild-stimulus preference, the target object may be prompted before playing: "The XX content is extremely stimulating and may not suit you; please consider whether to continue", or part of the content may be played after the prompt so that the target object can decide whether to continue. For another example, if the stimulus level is mild stimulus and the multimedia acceptance degree of the target object is a heavy-stimulus preference, the target object may be prompted: "The XX content is not stimulating enough and may not suit you; please consider choosing other XX". For yet another example, if the stimulus level is heavy stimulus and the multimedia acceptance degree is a heavy-stimulus preference, the target object may be prompted: "The XX content is extremely stimulating and should meet your needs; let us feel the heartbeat together".
In some embodiments, when the target multimedia information is played, desensitization processing may be performed on the elements to which the target object is sensitive, the sensitive elements may be replaced, or playback may be paused before a sensitive element of the target object is played; the disclosure is not limited thereto.
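The three playback strategies just listed — desensitize, pause, replace — can be dispatched per element as sketched below. The portrait's list names and the strategy labels are assumptions made for illustration.

```python
def play_action(element, portrait):
    """Pick a playback strategy for a sensitive element according to which
    sensitivity list of the (assumed) portrait structure it appears in."""
    if element in portrait.get("desensitize", ()):
        return "blur"               # desensitization processing
    if element in portrait.get("pause_before", ()):
        return "pause_and_prompt"   # pause before the element plays
    if element in portrait.get("replace", ()):
        return "substitute_segment" # replacement processing
    return "play_normally"

portrait = {"desensitize": ["blood"], "pause_before": ["ghost"], "replace": ["zombie"]}
```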
According to the multimedia information playing method provided by the embodiment of the disclosure, the multimedia acceptance degree information of the target object is determined according to the historical behavior information of the target object for historical multimedia information, and the first multimedia information is played to the target object according to that acceptance degree information. The technical scheme provided by this embodiment finely grades the target object's acceptance of multimedia according to its historical behavior, covers the requirements of different target objects, and improves the user experience.
Fig. 19 is a diagram illustrating a multimedia playing method according to an exemplary embodiment. As shown in fig. 19, the multimedia playing method involves a plurality of execution subjects such as a user, a client, and a server background.
Referring to fig. 19, the above-described multimedia playing method may include the following steps.
In step S001, the user selects a movie through the client interface.
In step S002, the client determines whether the current movie contains stimulating content.
If the current movie does not include stimulating content, step S003 is executed to directly play the movie; if the current movie includes stimulating content, step S004 is executed to prompt the user to take the grading test.
In step S005, the user decides whether to accept the test.
If the user does not accept the grading test, executing step S003 to directly play the film; if the user accepts the grading test, step S006 is executed, i.e. the user views the test piece.
In steps S007 to S009, while the user views the test video, the eye tracking system of the client tracks the user's eye movement information, the behavior recognition system recognizes the user's stress behaviors, the voice recognition system recognizes the user's stress utterances, and the client sends the user's eye movement information, stress behaviors, stress utterances, and other information to the server background.
In step S010, the server background analyzes the user's sensitive elements according to the user's eye movement information, stress behaviors, stress utterances, and the like, generates a graded portrait of the user, and sends the grading result in the graded portrait, together with viewing suggestions generated from the grading result, to the client.
In step S011, the server background determines the elements to be masked according to the graded portrait and sends a masking request to the client.
In step S012, the client masks (e.g. blurs) the elements to be masked according to the masking request, and displays the grading result and the viewing suggestions to the user.
In step S013, the user views the movie according to the rating result.
In step S014, the server background selects suitable movies from the movie pool according to the user's graded portrait, and sends the recommendation result to the client.
In step S015, the client presents the recommendation result after the movie finishes playing.
In step S016, the user browses the recommended movies and continues viewing.
Fig. 20 is a block diagram illustrating a multimedia information playing apparatus according to an exemplary embodiment. Referring to fig. 20, a multimedia information playing device 2000 provided by an embodiment of the present disclosure may include: a target object determination module 2001, a target portrait determination module 2002, a first multimedia information acquisition module 2003, and a first playback module 2004.
Wherein the target object determination module 2001 may be configured to determine a target object. The target representation determination module 2002 may be configured to obtain target representation information of the target object, the target representation information comprising multimedia acceptance information of the target object, wherein the target representation information is determined based on historical behavior information of the target object for historical multimedia information. The first multimedia information acquisition module 2003 may be configured to acquire first multimedia information including target content. The first playing module 2004 may be configured to play the first multimedia information according to the target content of the first multimedia information and the multimedia acceptance information of the target object.
In some embodiments, the first playing module 2004 may include: the device comprises a first prompt information determining unit and a first prompt information displaying unit.
The first prompt information determining unit may be configured to determine the first prompt information according to the target content of the first multimedia information and the multimedia acceptance degree information of the target object. The first cue information display unit may be configured to display the first cue information before playing the first multimedia information.
In some embodiments, the target representation information further includes a first sensitive element of the target object.
In some embodiments, the first playing module 2004 may include: a desensitizing unit.
Wherein, the desensitizing unit may be configured to desensitize the first sensitive element when playing the first multimedia information if the first multimedia information includes the first sensitive element of the target object.
In some embodiments, the target representation information includes a second sensitive element of the target object.
In some embodiments, the first playing module 2004 may include: the system comprises a pause unit, a second prompt information acquisition unit and a second prompt information display unit.
The pause unit may be configured to pause the playing of the first multimedia information before playing the second sensitive element if the first multimedia information includes the second sensitive element of the target object. The second hint information obtaining unit may be configured to obtain the second hint information. The second hint information display unit may be configured to display the second hint information.
In some embodiments, the target representation information further includes a third sensitive element of the target object.
In some embodiments, the first playing module 2004 may include: a substitute multimedia information acquisition unit and a substitute unit.
Wherein the substitute multimedia information obtaining unit may be configured to obtain the substitute multimedia information if the first multimedia information includes a third sensitive element of the target object. The replacing unit may be configured to replace the playing of the segment where the third sensitive element is located by the replacing multimedia information.
In some embodiments, the multimedia information playing device 2000 may further include: and a recommendation module.
Wherein the recommendation module may be configured to recommend multimedia information to the target object according to target portrait information of the target object.
Since the respective functional modules of the multimedia information playing device 2000 of the exemplary embodiment of the present disclosure correspond to the steps of the exemplary embodiment of the multimedia information playing method described above, the description thereof will not be repeated here.
Fig. 21 is a block diagram illustrating a multimedia information playing apparatus according to an exemplary embodiment. Referring to fig. 21, a multimedia information playing apparatus 2100 provided by an embodiment of the present disclosure may include: a historical behavior determination information module 2101, a target portraits determination module 2102, and a second play module 2103.
Wherein the historical behavior determination information module 2101 may be configured to obtain historical behavior information of the target object viewing the second multimedia information. The target representation determination module 2102 may be configured to determine a target representation of the target object based on the historical behavior information, the target representation including multimedia acceptance information of the target object. The second playing module 2103 may be configured to play target multimedia information to the target object according to the multimedia acceptance degree information of the target object.
In some embodiments, the second multimedia information is target video information, the target portrait further includes a sensitive scene of the target object, and the historical behavior information includes stress response information of the target object and first time information corresponding to the stress response information.
In some embodiments, the target representation determination module 2102 may include: a target frame image determining unit and a sensitive scene determining unit.
The target frame image determining unit may be configured to determine a target frame image in the target video according to stress response information of the target object and first time information corresponding to the stress response information. The sensitive scene determination unit may be configured to determine a sensitive scene of the target object from the target frame image.
In some embodiments, the target representation further comprises a first target sensitive element of the target object, and the historical behavior information further comprises eye movement information of the target object.
In some embodiments, the target representation determination module 2102 may include: a first object-sensitive element determination unit. Wherein the first target sensitive element determining unit may be configured to determine a first target sensitive element of the target object in the target frame image according to eye movement information of the target object.
In some embodiments, the eye movement information includes position information and movement information of an eyeball of the target object.
In some embodiments, the first target sensitive element determination unit may include: the line-of-sight region determination subunit and the first object-sensitive element determination subunit.
Wherein the line-of-sight region determination subunit may be configured to determine the line-of-sight region of the target object in the target frame image according to the positional information of the eyeball. The first target sensitive element determination subunit may be configured to determine a first target sensitive element of the target object according to a line-of-sight area of the target object and movement information of the eyeball.
In some embodiments, the second multimedia information is target audio information, the target portrait further includes a second target sensitive element of the target object, and the historical behavior information includes stress response information of the target object and second time information corresponding to the stress response information.
In some embodiments, the target representation determination module 2102 may include: and a second target sensitive element determining unit.
The second target sensitive element determining unit may be configured to determine a second target sensitive element of the target object in the target audio according to stress response information of the target object and second time information corresponding to the stress response information.
In some embodiments, the second playing module 2103 may include: and the target multimedia information acquisition unit and the third playing unit.
Wherein the target multimedia information acquisition unit may be configured to acquire the target multimedia information including target content. The third playing unit may be configured to play the target multimedia information to the target object according to the target content of the target multimedia information and the multimedia acceptance degree information of the target object.
Since the respective functional modules of the multimedia information playing device 2100 according to the example embodiment of the present disclosure correspond to the steps of the example embodiment of the multimedia information playing method described above, the description thereof will not be repeated here.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, aspects of embodiments of the present disclosure may be embodied in a software product, which may be stored on a non-volatile storage medium (which may be a CD-ROM, a U-disk, a mobile hard disk, etc.), comprising instructions for causing a computing device (which may be a personal computer, a server, a mobile terminal, or a smart device, etc.) to perform a method in accordance with embodiments of the present disclosure, such as one or more of the steps shown in fig. 3.
Furthermore, the above-described figures are only schematic illustrations of processes included in the method according to the exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the disclosure is not limited to the precise constructions described above and illustrated in the accompanying drawings, and that various modifications and equivalent arrangements may be made without departing from the spirit and scope of the appended claims.

Claims (18)

1. A multimedia information playing method, comprising:
Determining a target object;
acquiring target portrait information of the target object, wherein the target portrait information comprises multimedia acceptance degree information of the target object, the multimedia acceptance degree information is determined according to stress response when the target object views or listens to stimulus content in historical multimedia information, and the multimedia acceptance degree information is used for measuring the acceptance degree of the target object to the stimulus content; wherein the target representation further comprises a first target sensitive element of the target object; the method for obtaining the target portrait information of the target object comprises the following steps: playing the historical multimedia information to the target object; obtaining stress response of the target object when receiving the historical multimedia information and first time information corresponding to the stress response, and obtaining eye movement information of the target object while obtaining the stress response, wherein the eye movement information comprises position information and movement information of eyeballs of the target object; determining a target frame image in the historical multimedia information according to the stress response information of the target object and the first time information corresponding to the stress response information; determining a sight line area of the target object in the target frame image according to the eyeball position information of the target object; determining a first target sensitive element of the target object according to the sight line area of the target object and the movement information of the eyeball so as to determine the stimulation content relative to the target object in first multimedia information according to the first target sensitive element;
acquiring first multimedia information, wherein the first multimedia information comprises target content, and the target content comprises the stimulus content; and
controlling playback of the stimulus content relative to the target object in the first multimedia information according to the multimedia acceptance degree information of the target object.
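The identification chain in claim 1 — a stress-response timestamp selects a target frame, and the concurrent gaze position selects a region of that frame as the first target sensitive element — can be sketched as below. This is a minimal illustrative model, not the patented implementation: every name (`StressEvent`, `GazeSample`, `frame_index_at`) and every constant (25 fps, a 3×3 gaze grid) is an assumption.

```python
from dataclasses import dataclass

@dataclass
class StressEvent:
    time_s: float    # first time information: when the stress response occurred
    intensity: float # e.g. a normalized heart-rate or skin-conductance spike

@dataclass
class GazeSample:
    time_s: float
    x: float         # eyeball position mapped to frame coordinates in [0, 1]
    y: float

def frame_index_at(time_s: float, fps: float = 25.0) -> int:
    """Map a stress-response timestamp to a target frame of the played video."""
    return int(time_s * fps)

def sight_line_region(gaze: GazeSample, grid: int = 3) -> tuple:
    """Quantize the gaze point into a coarse (row, col) cell of the target frame."""
    col = min(int(gaze.x * grid), grid - 1)
    row = min(int(gaze.y * grid), grid - 1)
    return (row, col)

def first_target_sensitive_element(event, gaze_samples, fps=25.0):
    """Return (target frame index, sight-line cell) using the gaze sample closest
    in time to the stress event; the cell localizes the sensitive element."""
    nearest = min(gaze_samples, key=lambda g: abs(g.time_s - event.time_s))
    return frame_index_at(event.time_s, fps), sight_line_region(nearest)
```

A real system would track gaze continuously and fuse multiple stress signals; the grid quantization merely stands in for whatever region model the implementation uses.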
2. The method according to claim 1, wherein controlling playback of the stimulus content relative to the target object in the first multimedia information according to the multimedia acceptance degree information of the target object comprises:
determining first prompt information according to the target content of the first multimedia information and the multimedia acceptance degree information of the target object; and
displaying the first prompt information before the first multimedia information is played.
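Claim 2's first prompt information can be modeled as a simple comparison between the content's stimulus rating and the viewer's measured acceptance degree. A sketch under the assumption that both are expressed as integer levels; the function name and prompt text are illustrative, not from the patent:

```python
from typing import Optional

def first_prompt(content_stimulus_level: int, acceptance_level: int) -> Optional[str]:
    """Return first prompt information to display before playback when the
    content's stimulus level exceeds the viewer's acceptance degree."""
    if content_stimulus_level > acceptance_level:
        return (f"This content is rated stimulus level {content_stimulus_level}, "
                f"above your comfort level {acceptance_level}. Continue?")
    return None  # acceptable content: play without any prompt
```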
3. The method of claim 1, wherein the target portrait information further comprises a first sensitive element of the target object; and wherein controlling playback of the stimulus content relative to the target object in the first multimedia information according to the multimedia acceptance degree information of the target object comprises:
if the first multimedia information comprises the first sensitive element of the target object, performing desensitization processing on the first sensitive element when the first multimedia information is played.
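The claim does not specify how desensitization processing is done; a common realization is mosaic pixelation of the frame region containing the sensitive element. The sketch below assumes a grayscale frame stored as a 2-D list of ints; the names and the block size are illustrative assumptions.

```python
def desensitize(frame, region, block=4):
    """Mosaic-pixelate the region of a grayscale frame that contains a first
    sensitive element, so the frame still plays but the element is obscured.
    frame: 2-D list of ints; region: (top, left, bottom, right), exclusive."""
    top, left, bottom, right = region
    out = [row[:] for row in frame]  # leave the source frame untouched
    for r in range(top, bottom, block):
        for c in range(left, right, block):
            # average each block of pixels inside the region...
            cells = [frame[i][j]
                     for i in range(r, min(r + block, bottom))
                     for j in range(c, min(c + block, right))]
            avg = sum(cells) // len(cells)
            # ...and flood the block with that average value
            for i in range(r, min(r + block, bottom)):
                for j in range(c, min(c + block, right)):
                    out[i][j] = avg
    return out
```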
4. The method of claim 1, wherein the target portrait information comprises a second sensitive element of the target object; and wherein controlling playback of the stimulus content relative to the target object in the first multimedia information according to the multimedia acceptance degree information of the target object comprises:
if the first multimedia information comprises the second sensitive element of the target object, pausing playback of the first multimedia information before the second sensitive element is played; and
displaying second prompt information.
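The pause-before-sensitive-element behavior of claim 4 can be expressed as a precomputed play/pause schedule. A sketch with illustrative names, assuming the start times of the second sensitive elements are known in seconds:

```python
def playback_schedule(duration_s, sensitive_starts,
                      prompt="Sensitive scene ahead. Continue?"):
    """Build a play/pause schedule: play up to each second sensitive element,
    pause there and display second prompt information, then resume on confirm."""
    actions, pos = [], 0.0
    for start in sorted(sensitive_starts):
        actions.append(("play", pos, start))
        actions.append(("pause_and_prompt", start, prompt))
        pos = start
    actions.append(("play", pos, duration_s))
    return actions
```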
5. The method of claim 1, wherein the target portrait information further comprises a third sensitive element of the target object; and wherein controlling playback of the stimulus content relative to the target object in the first multimedia information according to the multimedia acceptance degree information of the target object comprises:
if the first multimedia information comprises the third sensitive element of the target object, acquiring alternative multimedia information; and
replacing playback of the segment in which the third sensitive element is located with the alternative multimedia information.
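Claim 5 splices alternative multimedia information over the segment containing the third sensitive element. A timeline-splicing sketch with illustrative names, assuming sensitive segments are given as (start, end) spans in seconds:

```python
def replace_segments(duration_s, sensitive_spans, alternative="alternative"):
    """Splice alternative multimedia over each (start, end) span that contains
    a third sensitive element; the rest of the original timeline is unchanged."""
    result, pos = [], 0.0
    for start, end in sorted(sensitive_spans):
        if pos < start:
            result.append(("original", pos, start))  # untouched span before
        result.append((alternative, start, end))     # spliced replacement
        pos = end
    if pos < duration_s:
        result.append(("original", pos, duration_s))
    return result
```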
6. The method as recited in claim 1, further comprising:
recommending multimedia information to the target object according to the target portrait information of the target object.
7. A multimedia information playing method, comprising:
acquiring a play request for target multimedia information;
before playing the target multimedia information, playing a test video to a target object to prompt the target object to perform a stimulus grading test, wherein the test video is second multimedia information;
obtaining a stress response of the target object when watching or listening to stimulus content in the test video;
determining a target portrait of the target object according to the stress response, wherein the target portrait comprises multimedia acceptance degree information, and the multimedia acceptance degree information is used to measure the acceptance degree of the target object for the stimulus content; and
after the stimulus grading test is completed, playing the stimulus content in the target multimedia information to the target object according to the multimedia acceptance degree information of the target object;
wherein, when the second multimedia information is target video information, the target portrait further comprises a sensitive scene of the target object, and determining the target portrait of the target object according to the stress response comprises: determining a target frame image in the target video according to stress response information of the target object and first time information corresponding to the stress response information; and determining the sensitive scene of the target object according to the target frame image, so as to determine the stimulus content in the target multimedia information according to the sensitive scene;
wherein the target portrait further comprises a first target sensitive element of the target object, eye movement information of the target object is obtained while the stress response is obtained, and determining the target portrait of the target object according to the stress response comprises: determining the first target sensitive element of the target object in the target frame image according to the eye movement information of the target object; and
wherein the eye movement information comprises position information and movement information of an eyeball of the target object, and determining the first target sensitive element of the target object in the target frame image according to the eye movement information of the target object comprises: determining a sight-line area of the target object in the target frame image according to the position information of the eyeball; and determining the first target sensitive element of the target object according to the sight-line area of the target object and the movement information of the eyeball, so as to determine the stimulus content relative to the target object in the target multimedia information according to the first target sensitive element.
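The stimulus grading test of claim 7 can be modeled as playing graded test clips and taking the highest grade whose peak stress response stays under a tolerance cutoff as the multimedia acceptance degree. The threshold value and all names below are illustrative assumptions, not from the patent:

```python
def acceptance_level(stress_by_level, threshold=0.7):
    """Stimulus grading test: given peak-normalized stress samples recorded
    while each graded test clip played ({level: [samples]}), return the
    highest level the viewer tolerated (peak stress below the threshold)."""
    tolerated = [level for level, samples in stress_by_level.items()
                 if max(samples, default=0.0) < threshold]
    return max(tolerated, default=0)  # level 0: no stimulus content tolerated
```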
8. The method of claim 7, wherein the second multimedia information is target audio information, the target portrait further comprises a second target sensitive element of the target object, and second time information corresponding to the stress response is obtained while the stress response is obtained; and wherein determining the target portrait of the target object according to the stress response comprises:
determining the second target sensitive element of the target object in the target audio according to stress response information of the target object and the second time information corresponding to the stress response information.
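Claim 8 locates the second target sensitive element in audio by matching the second time information (the stress-response timestamp) against segment boundaries. A sketch with illustrative names, assuming the audio is already split into labeled segments:

```python
def second_target_sensitive_element(stress_time_s, audio_segments):
    """Return the label of the audio segment playing at the stress-response
    moment (second time information); None if no segment covers that moment.
    audio_segments: list of (start_s, end_s, label)."""
    for start, end, label in audio_segments:
        if start <= stress_time_s < end:
            return label
    return None
```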
9. A multimedia information playing device, comprising:
a target object determination module configured to determine a target object;
a target portrait determination module configured to acquire target portrait information of the target object, wherein the target portrait information comprises multimedia acceptance degree information of the target object, the multimedia acceptance degree information is determined according to a stress response of the target object when viewing or listening to stimulus content in historical multimedia information, and the multimedia acceptance degree information is used to measure the acceptance degree of the target object for the stimulus content; wherein the target portrait information further comprises a first target sensitive element of the target object; and wherein acquiring the target portrait information of the target object comprises: playing the historical multimedia information to the target object; obtaining a stress response of the target object when receiving the historical multimedia information and first time information corresponding to the stress response, and obtaining eye movement information of the target object while obtaining the stress response, wherein the eye movement information comprises position information and movement information of an eyeball of the target object; determining a target frame image in the historical multimedia information according to the stress response of the target object and the first time information corresponding to the stress response; determining a sight-line area of the target object in the target frame image according to the position information of the eyeball of the target object; and determining the first target sensitive element of the target object according to the sight-line area of the target object and the movement information of the eyeball, so as to determine the stimulus content relative to the target object in first multimedia information according to the first target sensitive element;
a first multimedia information acquisition module configured to acquire first multimedia information, wherein the first multimedia information comprises target content, and the target content comprises the stimulus content; and
a first playing module configured to control playback of the stimulus content relative to the target object in the first multimedia information according to the multimedia acceptance degree information of the target object.
10. The apparatus of claim 9, wherein the first playing module comprises:
a first prompt information determination unit configured to determine first prompt information according to the target content of the first multimedia information and the multimedia acceptance degree information of the target object; and
a first prompt information display unit configured to display the first prompt information before the first multimedia information is played.
11. The apparatus of claim 9, wherein the target portrait information further comprises a first sensitive element of the target object; and wherein the first playing module comprises:
a desensitization unit configured to perform desensitization processing on the first sensitive element when the first multimedia information is played, if the first multimedia information comprises the first sensitive element of the target object.
12. The apparatus of claim 9, wherein the target portrait information comprises a second sensitive element of the target object; and wherein the first playing module comprises:
a pause unit configured to pause playback of the first multimedia information before the second sensitive element is played, if the first multimedia information comprises the second sensitive element of the target object; and
a second prompt information display unit configured to display second prompt information.
13. The apparatus of claim 9, wherein the target portrait information comprises a third sensitive element of the target object; and wherein the first playing module comprises:
an alternative multimedia information acquisition unit configured to acquire alternative multimedia information if the first multimedia information comprises the third sensitive element of the target object; and
a replacement unit configured to replace playback of the segment in which the third sensitive element is located with the alternative multimedia information.
14. The apparatus of claim 9, wherein the multimedia information playing apparatus further comprises:
a recommendation module configured to recommend multimedia information to the target object according to the target portrait information of the target object.
15. A multimedia information playing device, comprising:
a play request acquisition module configured to acquire a play request for target multimedia information;
a test module configured to play, before the target multimedia information is played, a test video to a target object to prompt the target object to perform a stimulus grading test, wherein the test video is second multimedia information, and to obtain a stress response of the target object when the target object watches or listens to stimulus content in the test video;
a target portrait determination module configured to determine a target portrait of the target object according to the stress response, wherein the target portrait comprises multimedia acceptance degree information, and the multimedia acceptance degree information is used to measure the acceptance degree of the target object for the stimulus content; and
a second playing module configured to play, after the stimulus grading test is completed, the stimulus content in the target multimedia information to the target object according to the multimedia acceptance degree information of the target object;
wherein, when the second multimedia information is target video information, the target portrait further comprises a sensitive scene of the target object, and the target portrait determination module comprises:
a target frame image determination unit configured to determine a target frame image in the target video according to stress response information of the target object and first time information corresponding to the stress response information; and
a sensitive scene determination unit configured to determine the sensitive scene of the target object according to the target frame image, so as to determine the stimulus content in the target multimedia information according to the sensitive scene;
wherein the target portrait further comprises a first target sensitive element of the target object, eye movement information of the target object is obtained while the stress response is obtained, and the target portrait determination module comprises:
a first target sensitive element determination unit configured to determine the first target sensitive element of the target object in the target frame image according to the eye movement information of the target object;
wherein the eye movement information comprises position information and movement information of an eyeball of the target object, and the first target sensitive element determination unit comprises:
a sight-line area determination subunit configured to determine a sight-line area of the target object in the target frame image according to the position information of the eyeball; and
a first target sensitive element determination subunit configured to determine the first target sensitive element of the target object according to the sight-line area of the target object and the movement information of the eyeball, so as to determine the stimulus content relative to the target object in the target multimedia information according to the first target sensitive element.
16. The apparatus of claim 15, wherein the second multimedia information is target audio information, the target portrait further comprises a second target sensitive element of the target object, and second time information corresponding to the stress response is obtained while the stress response is obtained; and wherein the target portrait determination module comprises:
a second target sensitive element determination unit configured to determine the second target sensitive element of the target object in the target audio according to stress response information of the target object and the second time information corresponding to the stress response information.
17. An electronic device, comprising:
one or more processors; and
a storage device configured to store one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-8.
18. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method of any one of claims 1-8.
CN202010600706.7A 2020-06-28 2020-06-28 Multimedia information playing method and device, electronic equipment and storage medium Active CN111654752B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010600706.7A CN111654752B (en) 2020-06-28 2020-06-28 Multimedia information playing method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111654752A CN111654752A (en) 2020-09-11
CN111654752B true CN111654752B (en) 2024-03-26

Family

ID=72348496

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010600706.7A Active CN111654752B (en) 2020-06-28 2020-06-28 Multimedia information playing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111654752B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115086772B (en) * 2022-06-10 2023-09-05 咪咕互动娱乐有限公司 Video desensitization method, device, equipment and storage medium
CN115225967A (en) * 2022-06-24 2022-10-21 网易(杭州)网络有限公司 Video processing method, video processing device, storage medium and computer equipment

Citations (6)

Publication number Priority date Publication date Assignee Title
CN105611412A (en) * 2015-12-22 2016-05-25 小米科技有限责任公司 Video file playing method, video clip determining method and device
CN107995523A (en) * 2017-12-21 2018-05-04 广东欧珀移动通信有限公司 Video broadcasting method, device, terminal and storage medium
CN108063979A (en) * 2017-12-26 2018-05-22 深圳Tcl新技术有限公司 Video playing control method, device and computer readable storage medium
CN108966011A (en) * 2018-07-13 2018-12-07 北京七鑫易维信息技术有限公司 A kind of control method for playing back, device, terminal device and storage medium
CN110225398A (en) * 2019-05-28 2019-09-10 腾讯科技(深圳)有限公司 Multimedia object playback method, device and equipment and computer storage medium
CN110909242A (en) * 2019-11-27 2020-03-24 北京奇艺世纪科技有限公司 Data pushing method, device, server and storage medium

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN105159990B (en) * 2015-08-31 2019-02-01 北京奇艺世纪科技有限公司 A kind of method and apparatus of media data grading control




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant