CN112637662A - Method, device and storage medium for generating vibration sense of media image - Google Patents


Info

Publication number
CN112637662A
Authority
CN
China
Prior art keywords
vibration; time point; vibration effect; effect; media image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011579323.2A
Other languages
Chinese (zh)
Inventor
梁哲
禤良
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202011579323.2A
Publication of CN112637662A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/458 Scheduling content for creating a personalised stream, e.g. by combining a locally stored advertisement with an incoming stream; Updating operations, e.g. for OS modules; time-related management operations

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure relates to a method, an apparatus, and a storage medium for generating a vibration sense for a media image. The method includes: playing the media image to be processed; monitoring subtitles loaded during playback of the media image to be processed, the subtitles having a correspondence with a vibration effect; and generating the vibration effect corresponding to the subtitle. The method and apparatus can improve the precision of haptic synchronization with the media image.

Description

Method, device and storage medium for generating vibration sense of media image
Technical Field
The present disclosure relates to the field of vibration effect technologies, and in particular, to a method, an apparatus, and a storage medium for generating vibration for a media image.
Background
With the development of science and technology, vibration (haptic) effects are finding ever wider application. For example, playing media images such as video while generating a matching sense of vibration has become a popular feature.
In the related art, a vibration sense is usually generated for a media image either by setting a timer or by preparing a haptic effect file of the same duration as the media. However, a timer is affected by interrupt preemption delay and by high-priority tasks, and separately initialized haptic effect files carry initial-time errors, so these approaches easily leave the vibration out of sync with the media image.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a method, an apparatus, and a storage medium for generating a vibration sensation in a media image.
According to a first aspect of the embodiments of the present disclosure, there is provided a method for generating a sense of vibration for a media image, which is applied to a terminal, the method for generating a sense of vibration for a media image including:
playing the media image to be processed; monitoring subtitles loaded in the playing process of the media image to be processed, wherein the subtitles have a corresponding relation with the vibration effect; and generating a vibration effect corresponding to the subtitle.
In one embodiment, a trigger time point for triggering generation of a vibration sense is set in the subtitle, and the trigger time point corresponds to a vibration effect tag.
Determining and generating a vibration effect corresponding to the subtitle, based on the monitored subtitle, includes: in response to monitoring the trigger time point in the subtitle, determining the vibration effect tag corresponding to the trigger time point; and, based on the vibration effect tag, generating the vibration effect corresponding to the vibration effect tag.
In one embodiment, the correspondence between the trigger time point and the vibration effect tag is determined as follows:
determining a time axis for playing the media image to be processed and a subtitle file loaded on the time axis; and setting, in the subtitle file, a trigger time point at which a vibration effect is to be triggered and a vibration effect tag corresponding to that trigger time point, the trigger time point being a time point on the time axis.
In one embodiment, the monitoring the trigger time point in the subtitle includes:
monitoring the time axis on which the media image to be processed is played; and, in response to detecting that a time point on the time axis matches the trigger time point, determining that the trigger time point in the subtitle has been monitored.
In one embodiment, a vibration effect actuator is installed in the terminal; generating, based on the vibration effect tag, a vibration effect corresponding to the vibration effect tag includes:
calling up, from a vibration effect parameter library and based on the vibration effect tag, the vibration effect parameters corresponding to the vibration effect tag; and controlling the vibration effect actuator, based on those parameters, to generate a matching vibration effect.
According to a second aspect of the embodiments of the present disclosure, there is provided an apparatus for generating a sense of vibration for a media image, which is applied to a terminal, the apparatus for generating a sense of vibration for a media image including:
the playing unit is used for playing the media image to be processed; the monitoring unit is used for monitoring the subtitles loaded in the playing process of the media image to be processed, and the subtitles have a corresponding relation with the vibration effect; and the vibration unit is used for generating a vibration effect corresponding to the subtitle.
In one embodiment, a trigger time point for triggering generation of a vibration sense is set in the subtitle, and the trigger time point corresponds to a vibration effect tag; the monitoring unit determines and generates a vibration effect corresponding to the subtitle, based on the monitored subtitle, in the following way:
in response to monitoring the trigger time point in the subtitle, determining the vibration effect tag corresponding to the trigger time point; and, based on the vibration effect tag, generating the vibration effect corresponding to the vibration effect tag.
In one embodiment, the playing unit is further configured to determine the correspondence between the trigger time point and the vibration effect tag as follows:
determining a time axis for playing the media image to be processed and a subtitle file loaded on the time axis; and setting, in the subtitle file, a trigger time point at which a vibration effect is to be triggered and a vibration effect tag corresponding to that trigger time point, the trigger time point being a time point on the time axis.
In one embodiment, the monitoring unit determines that the trigger time point in the subtitle is monitored by the following method:
monitoring the time axis on which the media image to be processed is played; and, in response to detecting that a time point on the time axis matches the trigger time point, determining that the trigger time point in the subtitle has been monitored.
In one embodiment, a vibration effect actuator is installed in the terminal; the vibration unit generates, based on the vibration effect tag, the vibration effect corresponding to the vibration effect tag in the following way:
calling up, from a vibration effect parameter library and based on the vibration effect tag, the vibration effect parameters corresponding to the vibration effect tag; and controlling the vibration effect actuator, based on those parameters, to generate a matching vibration effect.
According to a third aspect of the embodiments of the present disclosure, there is provided an apparatus for generating a sense of vibration for a media image, comprising:
a processor; a memory for storing processor-executable instructions;
wherein the processor is configured to: a method of generating a sensation of vibration in a media image as described in the first aspect or any one of the embodiments of the first aspect is performed.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium, wherein instructions of the storage medium, when executed by a processor of a mobile terminal, enable the mobile terminal to perform the method for generating a sense of vibration for a media image as described in the first aspect or any one of the implementation manners of the first aspect.
The technical solution provided by the embodiments of the present disclosure can have the following beneficial effects: the media image to be processed is determined and played, and the subtitles loaded during its playback are monitored, the subtitles having a correspondence with the vibration effect. A vibration effect corresponding to the monitored subtitle is then determined and generated, so that a vibration effect synchronized with the media image can be produced.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart illustrating a method for generating a sensation of vibration in a media image according to an exemplary embodiment.
FIG. 2 is a flow diagram illustrating a method for generating a sensation of vibration for a media image based on a vibration effect tag, according to an exemplary embodiment.
FIG. 3 is a flow diagram illustrating a method for generating, for a media image, a vibration effect corresponding to a vibration effect tag, according to an exemplary embodiment.
FIG. 4 is a flow chart illustrating a method for generating a sensation of vibration for a media image according to an exemplary embodiment.
FIG. 5 is a block diagram of an apparatus for generating a sensation of vibration for a media image according to an exemplary embodiment.
Fig. 6 is a block diagram illustrating an apparatus for generating a sensation of vibration for a media image according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The method for generating the vibration sense of the media image provided by the embodiment of the disclosure can be applied to scenes for adding the vibration sense effect to the media images such as images and videos. For example, the method can be used in a scene for adding a vibration effect to a video played in the terminal.
In the related art, one implementation generates the vibration sense for a media image by setting a timer. For example, during playback, timers are preset, based on the playback time axis of the media image, at the time points where a vibration effect needs to be generated. When a timer expires at such a time point, it triggers transmission of a vibration signal to the vibration actuator, and the actuator generates vibration feedback from the received signal. Another related-art approach prepares a vibration effect file of the same duration as the media image and plays the two simultaneously to produce the vibration sense.
However, in both of the above implementations, the media image and the vibration are driven by different processes. When the timing behavior of those processes differs, they drift out of step, and the synchronization precision between the media image and the vibration sense becomes unstable. For example, a timer is based on the system clock tick, so the shortest interval it can time is one tick period, and its expiry is further affected by interrupt preemption delay and high-priority tasks, making its precision unstable. When the media image and the vibration effect file are started and called simultaneously, the two processes carry initial-time errors from their separate initializations, and during playback they may suffer different resource contention, so the desynchronization between image and vibration grows increasingly apparent. The related-art implementations are therefore limited: they cannot reliably keep the media image and the vibration effect in sync.
In view of the above, the embodiments of the present disclosure provide a method for generating a sense of vibration for a media image, in which the vibration effect is added to the subtitle file of the media image and a correspondence between the subtitles in the file and the vibration effect is created. By monitoring the subtitles, the vibration effect corresponding to each subtitle is called back, improving the precision of time synchronization between the media image and the generated vibration.
Fig. 1 is a flowchart illustrating a method for generating a vibration sense for a media image according to an exemplary embodiment. As shown in FIG. 1, the method is used in a terminal and includes the following steps.
In step S11, the media image to be processed is played.
In the embodiment of the present disclosure, the media image to be processed is a media image to which a vibration effect is to be added; it may be, for example, a movie or short video to be played on a mobile phone, or a video recorded by the terminal's camera.
Further, the media image to be processed in the embodiment of the present disclosure contains a subtitle file. When adding the vibration effect, the required effect can be attached through the corresponding subtitle content at the corresponding time point in the subtitle file. The added vibration effect can be understood as the vibration required when the subtitle content at a certain time point is played.
In step S12, subtitles loaded during the playing process of the media image to be processed are monitored, and there is a corresponding relationship between the subtitles and the vibration effect.
In step S13, a tremolo effect corresponding to the subtitle is generated.
The process of generating a sense of vibration for a media image in the above embodiment is described below with reference to practical applications.
In one embodiment of the present disclosure, a correspondence between subtitles and a vibration effect may be created in advance.
In the embodiment of the present disclosure, a vibration effect tag may be created for a vibration effect, together with a correspondence between the tag and a subtitle. The vibration effect tag can be understood as the identifier of the vibration effect corresponding to the subtitle content; it may be the name or identification (ID) of the vibration effect.
For example, a trigger time point for triggering generation of a sense of vibration is set in a subtitle based on a time point for generating a sense of vibration in a subtitle file, and a correspondence between the trigger time point and a sense of vibration effect tag is created.
FIG. 2 is a flow diagram illustrating a method for generating a sensation of vibration based on a media image of a vibration effect tag, according to an example embodiment. The implementation process of steps S21 and S22 is similar to that of steps S11 and S12 in fig. 1, and the description of the same parts is omitted here.
In the embodiment of the disclosure, a triggering time point for triggering to generate a vibration sense is set in the subtitle, and the triggering time point and the vibration sense effect tag have a corresponding relationship.
The correspondence between the trigger time point and the vibration effect tag may be preset. In one embodiment, the trigger time point may be determined based on the time axis for playing the media image to be processed, and the correspondence between the trigger time point and the vibration effect tag created from it. For example, the time axis for playing the media image to be processed and the subtitle file loaded on that axis are determined; a trigger time point at which the vibration effect is to be triggered is set in the subtitle file, the trigger time point being a time point on the time axis; and a vibration effect tag corresponding to the trigger time point is set in the subtitle file.
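As a purely illustrative example (the disclosure does not prescribe a subtitle format, and the `{vib:...}` tag syntax here is hypothetical), trigger time points and their vibration effect tags could be embedded in an SRT-style subtitle file like this:

```
1
00:00:05,200 --> 00:00:07,000
{vib:rumble_low} The engine roars to life.

2
00:00:12,500 --> 00:00:13,100
{vib:sharp_tick} A door slams shut.
```

On this reading, each cue's start time serves as the trigger time point on the playback time axis, and the tag names (`rumble_low`, `sharp_tick`) index entries in a vibration effect parameter library.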
In an embodiment of the present disclosure, by determining the time axis for playing the media image to be processed, a vibration effect tag may be set in the subtitle file for the subtitle content at any time point of the media image. The vibration effect tag can be understood as the identifier of the vibration effect corresponding to the subtitle content, and may be the name or identification (ID) of the vibration effect. Subsequent subtitle monitoring identifies the tag and generates the corresponding vibration effect, so that effects are added more precisely. For example, a vibration effect tag B may be set for time point A of a subtitle; during playback, the subtitle monitor detects time point A and, from the correspondence between time point A and tag B, obtains the vibration effect for that moment.
In an embodiment of the disclosure, the trigger time point of the vibration effect may be determined based on the time axis of the media image to be processed. For example, by determining the time axis for playing the media image and loading the subtitle file attached to it, the time point of each piece of subtitle content on the axis can be obtained. In one example, the subtitle file marks and divides the time points of the media image's time axis, and the vibration effect tags added to the subtitle file are loaded along with it. It can be understood that the subtitle contents at the various time points are independent: the same subtitle content has different time characteristics at different time points, and different subtitle contents at the same time point are likewise independent of one another.
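A minimal sketch of this loading step, under illustrative assumptions (the cue line format `"start_ms|text {vib:tag}"` and the tag syntax are hypothetical; the disclosure only requires that each time point carry its subtitle content and, optionally, a vibration effect tag):

```python
import re

# Hypothetical cue lines: "start_ms|subtitle text {vib:tag}".
CUES = [
    "5200|The engine roars to life. {vib:rumble_low}",
    "9000|A quiet moment.",
    "12500|A door slams shut. {vib:sharp_tick}",
]

def load_cues(lines):
    """Return {time_point_ms: (subtitle_text, vibration_tag_or_None)},
    dividing the time axis into independent, tagged time points."""
    table = {}
    for line in lines:
        time_ms, text = line.split("|", 1)
        m = re.search(r"\{vib:(\w+)\}", text)
        tag = m.group(1) if m else None
        clean = re.sub(r"\s*\{vib:\w+\}", "", text)  # strip the tag from display text
        table[int(time_ms)] = (clean, tag)
    return table
```

Each entry is independent of the others, matching the description above: the same text at two time points yields two separate entries with different time characteristics.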
In step S23, in response to the detection of the trigger time point in the subtitle, a vibration effect tag corresponding to the trigger time point is determined.
In the embodiment of the disclosure, determining the vibration effect tag corresponding to the trigger time point in response to monitoring that time point in the subtitle can be understood as the subtitle monitor screening, based on time characteristics and subtitle content characteristics, the subtitle content to which a vibration effect must be added. Here, the vibration effect tag marks the text content, at a given time point, to which the vibration effect is to be added.
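The time-point matching itself can be sketched as follows. The trigger table, the millisecond units, and the small tolerance window (to absorb playback-callback jitter) are illustrative assumptions, not specified by the disclosure:

```python
# Trigger time points (ms on the playback time axis) mapped to
# vibration effect tags, as loaded from the subtitle file.
TRIGGER_POINTS = {5200: "rumble_low", 12500: "sharp_tick"}

def match_trigger(position_ms, triggers=TRIGGER_POINTS, tolerance_ms=20):
    """Return the vibration effect tag whose trigger time point matches
    the current playback position, or None if no point matches."""
    for t, tag in triggers.items():
        if abs(position_ms - t) <= tolerance_ms:
            return tag
    return None
```

Because the comparison is against the media image's own time axis, a match still fires correctly after seeking or fast-forwarding, which is the synchronization property the disclosure emphasizes.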
It can be understood that the subtitle monitoring in the embodiments of the present disclosure is not limited to identifying trigger time points carrying vibration effect tags on the time axis; other identifiable features may also be used. For example, the subtitle monitor may identify and monitor text content, audio, pixels, or other information corresponding to the vibration effect tag on the time axis, achieving the same purpose of monitoring and identifying the tag.
In step S24, based on the vibration effect tag, a vibration effect corresponding to the vibration effect tag is generated.
According to the method for generating a vibration sense for a media image provided by the embodiments of the present disclosure, the vibration effect is generated by monitoring the subtitles and calling back: whether the playback time point of the media image matches a time point defined in the subtitles can be monitored continuously in real time, so that triggering of the vibration effect is synchronized with playback of the media image and the matching precision of the vibration effect is improved.
In one example, a user may feel vibration while watching a built-in video on a mobile phone: when the user opens the video, the video and its subtitle file are loaded together; the subtitle file contains the time points and vibration effect tags, and at the corresponding time point on the time axis the monitor calls back the vibration effect tag in the subtitle, generating the corresponding vibration.
In the embodiment of the present disclosure, a vibration effect parameter library may be provided, storing the parameters for generating various vibration effects, for example the parameters of the vibration effect referred to by each vibration effect tag. A vibration effect parameter can be understood as a parameter that controls the vibration effect actuator in the terminal to generate vibration.
FIG. 3 is a flow diagram illustrating a method for generating, for a media image, a vibration effect corresponding to a vibration effect tag, according to an exemplary embodiment. As shown in FIG. 3, generating, based on the vibration effect tag, a vibration effect corresponding to the tag includes the following steps.
In step S231, based on the vibration effect tag, a vibration effect parameter corresponding to the vibration effect tag is called in the vibration effect parameter library.
The vibration effect parameters include vibration frequency, vibration amplitude, and other parameters that shape the vibration effect. For example, for a slight, rapid vibration, a low amplitude and a fast frequency are called from the vibration effect parameter library as the effect parameters and passed to the vibration effect actuator for execution, producing the corresponding vibration effect.
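Steps S231 and S232 can be sketched together as a lookup followed by an actuator call. The library contents, the parameter names (frequency, amplitude, duration), and the actuator interface are illustrative assumptions; a real terminal would drive its motor through a platform haptics API:

```python
# Hypothetical vibration effect parameter library: each tag maps to the
# parameters that drive the actuator (frequency in Hz, amplitude 0..1,
# duration in ms). Values are illustrative only.
EFFECT_LIBRARY = {
    "rumble_low": {"frequency": 60,  "amplitude": 0.8, "duration_ms": 600},
    "sharp_tick": {"frequency": 250, "amplitude": 0.4, "duration_ms": 40},
}

class VibrationActuator:
    """Stand-in for the terminal's vibration effect actuator."""
    def __init__(self):
        self.played = []

    def vibrate(self, frequency, amplitude, duration_ms):
        # A real driver would command the motor here; we record the call.
        self.played.append((frequency, amplitude, duration_ms))

def generate_effect(tag, actuator, library=EFFECT_LIBRARY):
    """S231: call up the parameters for `tag`; S232: drive the actuator."""
    params = library[tag]
    actuator.vibrate(**params)
    return params
```

Keeping the parameters in a library, keyed by tag, is what lets the subtitle file stay small: it carries only tag names, never raw waveform data.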
In an embodiment of the present disclosure, the media image to be processed undergoes subtitle monitoring to identify the vibration effect tags for which a vibration effect must be generated, and the subtitle content corresponding to each tag is passed to the vibration actuator as the effect parameters required by the vibration method. The actuator adjusts vibration parameters such as amplitude and frequency based on the received effect parameters, generating the corresponding vibration and thereby adding the vibration effect on the terminal.
In step S232, the vibration effect actuator is controlled to generate a vibration effect matching the vibration effect parameter based on the vibration effect parameter.
In one example, the vibration actuator receives the vibration effect parameters passed by the subtitle monitor, generates the matching vibration, and transmits it to the part of the body in contact with the terminal device. For example, a person watching videos on a mobile phone feels the vibration through the hand holding it. Synchronous output of sound, video, and vibration keeps the viewer's touch, hearing, and vision highly consistent, improving the viewing experience.
In an embodiment of the disclosure, a vibration effect corresponding to the vibration effect tag may be generated based on the tag. For example, the information corresponding to the tag may be passed by the subtitle monitor to the vibration actuator, which generates a vibration effect matching that information. Concretely, based on the vibration effect tag at a marked time point, the subtitle monitor identifies the corresponding subtitle content, compares it against the vibration effect parameter library to obtain the corresponding vibration effect parameters, and outputs those parameters to the vibration actuator, which generates the vibration matched to the tag.
FIG. 4 is a flow chart illustrating a method for generating a sensation of vibration for a media image according to an exemplary embodiment. As shown in FIG. 4, an original media image without a vibration effect is input, and the trigger time points and vibration effect tags required for the effect are set in its subtitle file, yielding the media image to be processed. The media image is then played while its subtitles are monitored. As long as the subtitle monitor recognizes no marked time point and tag, it keeps monitoring as playback proceeds. When it does recognize a marked time point and vibration effect tag, it outputs the subtitle content corresponding to the tag. That subtitle content, serving as the vibration effect tag, is compared against the vibration effect parameter library to determine the corresponding vibration effect parameters. The parameters are passed to the vibration actuator, which generates the corresponding vibration, producing the effect marked by the tag. In this way the viewer experiences, while watching the media image, a vibration effect matched to its content.
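The flow of FIG. 4 can be sketched end to end: play, monitor the time axis, look up the tag, actuate. The tag names, the parameter tuples, and the simple polling loop are all illustrative assumptions introduced here, not details from the disclosure:

```python
# Trigger time points (ms) -> vibration effect tags, as set in the subtitle file.
SUBTITLE_TRIGGERS = {1000: "tap", 3000: "buzz"}
# Tag -> (frequency Hz, amplitude 0..1, duration ms) in the parameter library.
PARAM_LIBRARY = {"tap": (200, 0.3, 30), "buzz": (90, 0.9, 400)}

def run_playback(duration_ms, step_ms=500):
    """Simulate playback of the time axis; return the (time, params)
    pairs that would be sent to the vibration actuator."""
    fired = []
    for position in range(0, duration_ms + 1, step_ms):
        tag = SUBTITLE_TRIGGERS.get(position)      # subtitle monitor callback
        if tag is not None:
            params = PARAM_LIBRARY[tag]            # parameter library lookup
            fired.append((position, params))       # actuator.vibrate(*params)
    return fired
```

Because both the subtitles and the vibration triggers ride on the single playback time axis, a seek or fast-forward moves them together, which is the core synchronization claim of the method.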
In general, the part in direct contact with a vibrating terminal device is the hand, one of the most vibration-sensitive parts of the body; a person can clearly perceive a timing difference of 50 ms. Any sensory delay is noticed and amplified, disrupting the user experience. In the embodiments of the present disclosure, by using the media image's own subtitle file, vibration effects are added on a single time axis, so the media image and the vibration effect are generated synchronously and the user experience is improved.
For example, because the vibration effects are added on the same time axis as playback, the user does not experience unsynchronized vibration in situations such as game screen stuttering or video fast-forwarding. Likewise, because the vibration is keyed to the subtitles, the vibration remains synchronized even when the user fast-forwards or rewinds the video.
Based on the same concept, the embodiment of the disclosure also provides a device for generating a vibration sense of a media image.
It is understood that the apparatus for generating a vibration sense of a media image according to the embodiments of the present disclosure includes hardware structures and/or software modules for performing the above functions. The disclosed embodiments can be implemented in hardware or in a combination of hardware and computer software, in combination with the exemplary units and algorithm steps disclosed herein. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
FIG. 5 is a block diagram of an apparatus for generating a vibration sense for a media image according to an exemplary embodiment. Referring to FIG. 5, the apparatus 100 for generating a vibration sense of a media image is applied to a terminal and includes a playing unit 101, a monitoring unit 102 and a vibration unit 103.
The playing unit 101 is configured to play the media image to be processed. The monitoring unit 102 is configured to monitor the subtitles loaded during playing of the media image to be processed, where the subtitles have a correspondence with vibration effects. The vibration unit 103 is configured to generate the vibration effect corresponding to the subtitle.
In one embodiment, a trigger time point for triggering generation of a vibration is set in the subtitles, and the trigger time point has a correspondence with a vibration effect tag. In response to monitoring the trigger time point in the subtitles, the monitoring unit 102 determines the vibration effect tag corresponding to the trigger time point, and a vibration effect corresponding to that tag is generated based on the tag.
In one embodiment, the playing unit 101 is further configured to determine a corresponding relationship between the trigger time point and the vibration effect tag.
The playing unit 101 determines the correspondence between the trigger time point and the vibration effect tag as follows: a time axis for playing the media image to be processed, and the subtitle file loaded on that time axis, are determined; then a trigger time point at which a vibration effect should be triggered, together with the vibration effect tag corresponding to that trigger time point, is set in the subtitle file, the trigger time point being a time point on the time axis.
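The correspondence just described, trigger time points on the playback time axis mapped to vibration effect tags, can be sketched as a small lookup table. The class and method names, and the millisecond granularity, are assumptions for illustration only.

```python
# Hypothetical sketch: a table of trigger time points (positions on the
# playback time axis, in milliseconds) mapped to vibration effect tags.
class VibrationCueTable:
    def __init__(self):
        self._cues = {}  # trigger_ms -> effect tag

    def set_cue(self, trigger_ms, effect_tag):
        """Register that `effect_tag` should fire at `trigger_ms` on the time axis."""
        self._cues[trigger_ms] = effect_tag

    def tag_at(self, trigger_ms):
        """Return the tag registered for this time point, or None."""
        return self._cues.get(trigger_ms)

table = VibrationCueTable()
table.set_cue(5000, "explosion")
table.set_cue(12500, "heartbeat")
print(table.tag_at(5000))   # explosion
print(table.tag_at(9999))   # None
```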
In one embodiment, the monitoring unit 102 determines the trigger time point in the monitored subtitles as follows: the time axis for playing the media image to be processed is monitored, and in response to a monitored time point on the time axis matching a trigger time point, it is determined that the trigger time point in the subtitles has been monitored.
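A minimal sketch of this monitoring step follows: the current playback position is compared against the registered trigger time points. A small tolerance window is an added assumption, since a poll rarely lands on the exact millisecond; the 50 ms figure echoes the perceptible-difference threshold mentioned earlier. Function and variable names are illustrative.

```python
# Hypothetical sketch of matching the playback position to trigger points.
def check_trigger(cues, position_ms, tolerance_ms=50, fired=None):
    """Return the effect tag whose trigger point matches position_ms, if any.

    `fired` tracks already-triggered time points so each cue fires once.
    """
    fired = fired if fired is not None else set()
    for trigger_ms, tag in cues:
        if trigger_ms in fired:
            continue
        if abs(position_ms - trigger_ms) <= tolerance_ms:
            fired.add(trigger_ms)
            return tag
    return None

cues = [(5000, "explosion"), (12500, "heartbeat")]
fired = set()
print(check_trigger(cues, 5010, fired=fired))   # explosion
print(check_trigger(cues, 5030, fired=fired))   # None (already fired)
print(check_trigger(cues, 12480, fired=fired))  # heartbeat
```

Because the comparison is against the playback position itself, seeking or fast-forwarding moves the trigger points along with the video, which is the synchronization behaviour the description claims.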
In one embodiment, a vibration effect actuator is installed in the terminal. The vibration unit 103 generates the vibration effect corresponding to the vibration effect tag as follows: based on the vibration effect tag, the vibration effect parameters corresponding to the tag are retrieved from a vibration effect parameter library, and the vibration effect actuator is controlled, based on those parameters, to generate a matching vibration effect.
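The tag-to-parameter lookup and the actuator call can be sketched as below. The parameter fields (amplitude, frequency, duration) and the actuator interface are assumptions for illustration; the disclosure only states that parameters are fetched from a library and passed to a vibration effect actuator.

```python
# Hypothetical parameter library; field names are illustrative assumptions.
VIBRATION_PARAMETER_LIBRARY = {
    "explosion": {"amplitude": 1.0, "frequency_hz": 60, "duration_ms": 400},
    "heartbeat": {"amplitude": 0.4, "frequency_hz": 2, "duration_ms": 800},
}

class FakeActuator:
    """Stand-in for the terminal's vibration effect actuator."""
    def __init__(self):
        self.log = []

    def vibrate(self, amplitude, frequency_hz, duration_ms):
        # A real actuator driver would command the motor here.
        self.log.append((amplitude, frequency_hz, duration_ms))

def play_effect(tag, actuator, library=VIBRATION_PARAMETER_LIBRARY):
    """Look up the parameters for `tag` and drive the actuator with them."""
    params = library.get(tag)
    if params is None:
        return False  # unknown tag: no vibration
    actuator.vibrate(**params)
    return True

actuator = FakeActuator()
print(play_effect("explosion", actuator))  # True
print(actuator.log)                        # [(1.0, 60, 400)]
```

On a real Android terminal, the `vibrate` call would be replaced by the platform haptics API; that mapping is outside what the disclosure specifies.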
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 6 is a block diagram illustrating an apparatus 200 for generating a sensation of vibration for a media image according to an exemplary embodiment. For example, the apparatus 200 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 6, the apparatus 200 may include one or more of the following components: a processing component 202, a memory 204, a power component 206, a multimedia component 208, an audio component 210, an input/output (I/O) interface 212, a sensor component 214, and a communication component 216.
The processing component 202 generally controls overall operation of the device 200, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 202 may include one or more processors 220 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 202 can include one or more modules that facilitate interaction between the processing component 202 and other components. For example, the processing component 202 can include a multimedia module to facilitate interaction between the multimedia component 208 and the processing component 202.
The memory 204 is configured to store various types of data to support operations at the apparatus 200. Examples of such data include instructions for any application or method operating on the device 200, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 204 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power components 206 provide power to the various components of device 200. Power components 206 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for device 200.
The multimedia component 208 includes a screen that provides an output interface between the device 200 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 208 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 200 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 210 is configured to output and/or input audio signals. For example, audio component 210 includes a Microphone (MIC) configured to receive external audio signals when apparatus 200 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 204 or transmitted via the communication component 216. In some embodiments, audio component 210 also includes a speaker for outputting audio signals.
The I/O interface 212 provides an interface between the processing component 202 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 214 includes one or more sensors for providing status assessments of various aspects of the device 200. For example, the sensor component 214 may detect the open/closed state of the device 200 and the relative positioning of components, such as the display and keypad of the device 200. The sensor component 214 may also detect a change in the position of the device 200 or of a component of the device 200, the presence or absence of user contact with the device 200, the orientation or acceleration/deceleration of the device 200, and a change in the temperature of the device 200. The sensor component 214 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact, and may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 214 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 216 is configured to facilitate wired or wireless communication between the apparatus 200 and other devices. The device 200 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 216 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 216 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 200 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as memory 204, comprising instructions executable by processor 220 of device 200 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It is understood that "a plurality" in this disclosure means two or more, and other words are analogous. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. The singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It will be further understood that the terms "first," "second," and the like are used to describe various information and that such information should not be limited by these terms. These terms are only used to distinguish one type of information from another and do not denote a particular order or importance. Indeed, the terms "first," "second," and the like are fully interchangeable. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure.
It will be further understood that, unless otherwise specified, "connected" includes direct connections between the two without the presence of other elements, as well as indirect connections between the two with the presence of other elements.
It is further to be understood that while operations are depicted in the drawings in a particular order, this is not to be understood as requiring that such operations be performed in the particular order shown or in serial order, or that all illustrated operations be performed, to achieve desirable results. In certain environments, multitasking and parallel processing may be advantageous.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. A method for generating a vibration sense of a media image, applied to a terminal, the method comprising the following steps:
playing the media image to be processed;
monitoring subtitles loaded in the playing process of the media image to be processed, wherein the subtitles have a corresponding relation with the vibration effect;
and generating a vibration effect corresponding to the subtitle.
2. The method of claim 1, wherein a trigger time point for triggering generation of the vibration is set in the subtitles, and the trigger time point has a corresponding relationship with a vibration effect tag;
determining and generating a vibration effect corresponding to the subtitle based on the monitored subtitle comprises:
in response to monitoring the trigger time point in the subtitle, determining the vibration effect tag corresponding to the trigger time point;
and generating, based on the vibration effect tag, a vibration effect corresponding to the vibration effect tag.
3. The method of claim 2, wherein the corresponding relationship between the trigger time point and the vibration effect tag is determined as follows:
determining a time axis for playing a media image to be processed and a subtitle file loaded on the time axis;
and setting a triggering time point needing to trigger a vibration effect and a vibration effect label corresponding to the triggering time point in the subtitle file, wherein the triggering time point is a time point on the time axis.
4. The method of claim 2, wherein monitoring the trigger time point in the caption comprises:
monitoring a time axis for playing the media image to be processed;
and determining that the trigger time point in the caption is monitored in response to monitoring that the time point on the time axis is matched with the trigger time point.
5. The method of any one of claims 2 to 4, wherein a vibration effect actuator is installed in the terminal;
the generating, based on the vibration effect tag, a vibration effect corresponding to the vibration effect tag comprises:
calling, based on the vibration effect tag, a vibration effect parameter corresponding to the vibration effect tag from a vibration effect parameter library;
and controlling, based on the vibration effect parameter, the vibration effect actuator to generate a vibration effect matched with the vibration effect parameter.
6. An apparatus for generating a vibration sense of a media image, applied to a terminal, the apparatus for generating a vibration sense of a media image comprising:
the playing unit is used for playing the media image to be processed;
the monitoring unit is used for monitoring the subtitles loaded in the playing process of the media image to be processed, and the subtitles have a corresponding relation with the vibration effect;
and the vibration unit is used for generating a vibration effect corresponding to the subtitle.
7. The apparatus for generating a vibration sense of a media image according to claim 6, wherein a trigger time point for triggering generation of a vibration is set in the subtitles, and the trigger time point has a corresponding relationship with a vibration effect tag;
the monitoring unit determines and generates a vibration effect corresponding to the subtitle based on the monitored subtitle as follows:
in response to monitoring the trigger time point in the subtitle, determining the vibration effect tag corresponding to the trigger time point;
and generating, based on the vibration effect tag, a vibration effect corresponding to the vibration effect tag.
8. The apparatus for generating a vibration sense of a media image according to claim 7, wherein the playing unit is further configured to determine the corresponding relationship between the trigger time point and the vibration effect tag as follows:
determining a time axis for playing a media image to be processed and a subtitle file loaded on the time axis;
and setting a triggering time point needing to trigger a vibration effect and a vibration effect label corresponding to the triggering time point in the subtitle file, wherein the triggering time point is a time point on the time axis.
9. The apparatus for generating a vibration sense of a media image according to claim 8, wherein the monitoring unit determines that the trigger time point in the caption is monitored as follows:
monitoring a time axis for playing the media image to be processed;
and determining that the trigger time point in the caption is monitored in response to monitoring that the time point on the time axis is matched with the trigger time point.
10. The apparatus for generating a vibration sense of a media image according to any one of claims 6 to 9, wherein a vibration effect actuator is installed in the terminal;
the vibration unit generates, based on the vibration effect tag, the vibration effect corresponding to the vibration effect tag as follows:
calling, based on the vibration effect tag, a vibration effect parameter corresponding to the vibration effect tag from a vibration effect parameter library;
and controlling, based on the vibration effect parameter, the vibration effect actuator to generate a vibration effect matched with the vibration effect parameter.
11. An apparatus for generating a sensation of vibration in a media image, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to: perform the method for generating a vibration sense of a media image according to any one of claims 1 to 5.
12. A non-transitory computer-readable storage medium having instructions stored thereon that, when executed by a processor of a mobile terminal, enable the mobile terminal to perform the method for generating a vibration sense of a media image according to any one of claims 1 to 5.
CN202011579323.2A 2020-12-28 2020-12-28 Method, device and storage medium for generating vibration sense of media image Pending CN112637662A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011579323.2A CN112637662A (en) 2020-12-28 2020-12-28 Method, device and storage medium for generating vibration sense of media image

Publications (1)

Publication Number Publication Date
CN112637662A true CN112637662A (en) 2021-04-09

Family

ID=75325508

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011579323.2A Pending CN112637662A (en) 2020-12-28 2020-12-28 Method, device and storage medium for generating vibration sense of media image

Country Status (1)

Country Link
CN (1) CN112637662A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202533946U (en) * 2012-03-29 2012-11-14 爱卡拉互动媒体股份有限公司 Situation instruction system
CN103366074A (en) * 2012-03-29 2013-10-23 爱卡拉互动媒体股份有限公司 Situational command system and operation method
US20150070150A1 (en) * 2013-09-06 2015-03-12 Immersion Corporation Method and System For Providing Haptic Effects Based on Information Complementary to Multimedia Content
CN107682642A (en) * 2017-09-19 2018-02-09 广州艾美网络科技有限公司 Identify the method, apparatus and terminal device of special video effect triggered time point
CN109873902A (en) * 2018-12-29 2019-06-11 努比亚技术有限公司 Result of broadcast methods of exhibiting, device and computer readable storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination