CN109873902A - Playback effect display method, apparatus and computer-readable storage medium - Google Patents
Playback effect display method, apparatus and computer-readable storage medium
- Publication number: CN109873902A (application number CN201811642138.6A)
- Authority: CN (China)
- Prior art keywords: information, video, content, screen, scene
- Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Landscapes
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
Embodiments of the present invention relate to the field of intelligent terminal technology, and in particular to a playback effect display method, an apparatus, and a computer-readable storage medium. The method includes: obtaining the screen display content; when the screen display content is video content, obtaining video feature information; obtaining the current playback scene information according to the video feature information; and controlling the deformation of the screen according to the playback scene information. By obtaining the video content shown on the screen and then extracting video feature information from it, with each piece of video feature information corresponding to a playback scene, the invention controls the deformation of the screen according to the corresponding playback scene, thereby presenting the user with a stereoscopic effect for the given scene and providing a better sensory experience when watching video.
Description
Technical field
Embodiments of the present invention relate to the field of intelligent terminal technology, and in particular to a playback effect display method, an apparatus, and a computer-readable storage medium.
Background art
In daily life and work, the mobile phone has gradually become an indispensable electronic device. Through a mobile phone, people can quickly reach the friends and relatives they want to contact, and can also entertain themselves with downloaded software, for example by playing games, listening to music, or watching videos. Mobile phones not only facilitate communication between people but also enrich daily life. In particular, with the development of the film and television industry, video resources of all kinds, such as films, TV series and entertainment videos, emerge one after another, and people can watch them through various applications.
With the development of science and technology, more and more 3D films with striking visual effects have entered people's field of view. The three-dimensional display effect brings the audience a strong visual impact during viewing, so an increasing number of people like watching 3D films. At present, however, such films can only be watched in a cinema with 3D glasses or with relatively high-end VR/AR equipment; they cannot be experienced on the mobile phones or tablets people commonly use, which imposes a high cost on users. If a 3D effect could be presented on intelligent terminals such as mobile phones or tablets, it would bring great enjoyment to people's daily life and video viewing.
The above description of the discovery process of the problem is provided only to facilitate understanding of the technical solution of the present invention, and does not constitute an admission that the above content is prior art.
Summary of the invention
In order to solve, or at least partially solve, the above technical problem, embodiments of the present invention provide a playback effect display method, an apparatus, and a computer-readable storage medium.
In view of this, in a first aspect, an embodiment of the present invention provides a playback effect display method, the method comprising:
obtaining the screen display content;
when the screen display content is video content, obtaining video feature information;
obtaining the current playback scene information according to the video feature information; and
controlling the deformation of the screen according to the playback scene information.
Optionally, the specific process of obtaining video feature information when the screen display content is video content, and obtaining the current playback scene information according to the video feature information, is as follows:
when the screen display content is video content, obtaining the subtitle information of the video content;
performing keyword extraction on the subtitle information to obtain keyword information; and
obtaining the playback scene information corresponding to the keyword information.
Optionally, before obtaining the current playback scene information according to the video feature information, the method further comprises:
establishing a playback scene library in advance, the playback scene library containing correspondences between keyword information and playback scene information.
Optionally, the specific process of obtaining video feature information when the screen display content is video content, and obtaining the current playback scene information according to the video feature information, is as follows:
when the screen display content is video content, obtaining the voice information of the video content;
performing speech recognition on the voice information to obtain voice feature information; and
obtaining the playback scene information corresponding to the voice feature information.
Optionally, before obtaining the current playback scene information according to the video feature information, the method further comprises:
establishing a playback scene library in advance, the playback scene library containing correspondences between voice feature information and playback scene information.
Optionally, the method further comprises:
performing deep learning on the video feature information and the playback scene information to obtain a learning result; and
updating the playback scene library according to the learning result.
Optionally, the specific process of controlling the deformation of the screen according to the playback scene information is as follows:
obtaining the picture currently displayed by the video content;
recognizing the picture to obtain picture feature information; and
performing deformation control, according to the playback scene information, on the position of the screen where the picture feature information is located.
Optionally, before obtaining the screen display content, the method further comprises:
sending the user a prompt for selecting whether to enable special-effect playback; and
obtaining the screen display content when the user chooses to enable special-effect playback.
In a second aspect, an embodiment of the present invention further provides a playback effect display apparatus, the apparatus comprising a processor configured with processor-executable operation instructions to perform the steps of the method according to the first aspect.
In a third aspect, an embodiment of the present invention further provides a computer-readable storage medium storing computer instructions which cause a computer to perform the steps of the method according to the first aspect.
Compared with the prior art, the playback effect display method proposed by the embodiments of the present invention can be applied to intelligent terminals with flexible screens. By obtaining the video content shown on the screen and then extracting video feature information from it, with each piece of video feature information corresponding to a playback scene, the method controls the deformation of the screen according to the corresponding playback scene, thereby presenting the user with a stereoscopic effect for the given scene and providing a better sensory experience when watching video.
Brief description of the drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without any creative effort.
Fig. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing various embodiments of the present invention;
Fig. 2 is an architecture diagram of a communication network system provided by an embodiment of the present invention;
Fig. 3 is a flowchart of the playback effect display method described in Embodiment 1 of the present invention;
Fig. 4 is a schematic diagram of a video picture according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a mobile terminal provided by Embodiment 3 of the present invention.
Specific embodiments
It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
In the following description, suffixes such as "module", "component" or "unit" used to denote elements are adopted only to facilitate the description of the invention and have no specific meaning in themselves; therefore, "module", "component" and "unit" can be used interchangeably.
A terminal can be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, tablet computers, laptops, palmtop computers, personal digital assistants (Personal Digital Assistant, PDA), portable media players (Portable Media Player, PMP), navigation devices, wearable devices, smart bracelets and pedometers, as well as fixed terminals such as digital TVs and desktop computers.
The following description takes a mobile terminal as an example; those skilled in the art will understand that, apart from elements specifically used for mobile purposes, the construction according to the embodiments of the present invention can also be applied to terminals of the fixed type.
Referring to Fig. 1, which is a schematic diagram of the hardware structure of a mobile terminal implementing various embodiments of the present invention, the mobile terminal 100 may include components such as an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110 and a power supply 111. Those skilled in the art will understand that the mobile terminal structure shown in Fig. 1 does not constitute a limitation on the mobile terminal; the mobile terminal may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
Each component of the mobile terminal is introduced below with reference to Fig. 1:
The radio frequency unit 101 can be used to receive and send signals during messaging or a call; specifically, it delivers downlink information received from a base station to the processor 110 for processing, and sends uplink data to the base station. In general, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer and the like. In addition, the radio frequency unit 101 can also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System of Mobile communication), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution) and TDD-LTE (Time Division Duplexing-Long Term Evolution).
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help the user send and receive e-mail, browse web pages, access streaming video and so on, providing the user with wireless broadband Internet access. Although Fig. 1 shows the WiFi module 102, it is understood that it is not an essential part of the mobile terminal and can be omitted as needed without changing the essence of the invention.
When the mobile terminal 100 is in a mode such as call-signal reception mode, call mode, recording mode, speech recognition mode or broadcast reception mode, the audio output unit 103 can convert audio data received by the radio frequency unit 101 or the WiFi module 102, or stored in the memory 109, into an audio signal and output it as sound. Moreover, the audio output unit 103 can also provide audio output related to a specific function performed by the mobile terminal 100 (for example, a call-signal reception sound or a message reception sound). The audio output unit 103 may include a loudspeaker, a buzzer and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a graphics processor (Graphics Processing Unit, GPU) 1041 and a microphone 1042. The graphics processor 1041 processes the image data of static pictures or video obtained by an image capture apparatus (such as a camera) in video capture mode or image capture mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or sent via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sound (audio data) in an operational mode such as telephone call mode, recording mode or speech recognition mode, and can process such sound into audio data. In telephone call mode, the processed audio (voice) data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101. The microphone 1042 can implement various types of noise elimination (or suppression) algorithms to eliminate (or suppress) the noise or interference generated while sending and receiving audio signals.
The mobile terminal 100 further includes at least one sensor 105, such as an optical sensor, a motion sensor and other sensors. Specifically, the optical sensor includes an ambient light sensor and a proximity sensor, wherein the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As a kind of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (generally three axes) and, when stationary, detect the magnitude and direction of gravity; it can be used in applications that recognize the posture of the phone (such as portrait/landscape switching, related games and magnetometer pose calibration) and in vibration-recognition functions (such as pedometers and tapping). The phone can also be configured with other sensors such as a fingerprint sensor, pressure sensor, iris sensor, molecular sensor, gyroscope, barometer, hygrometer, thermometer and infrared sensor, which are not described in detail here.
The display unit 106 is used to display information input by the user or provided to the user. The display unit 106 may include a display panel 1061, which can be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED) or the like.
The user input unit 107 can be used to receive input numeric or character information and to generate key-signal input related to the user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also called a touch screen, collects the user's touch operations on or near it (for example operations performed by the user on or near the touch panel 1071 with a finger, a stylus or any other suitable object or accessory) and drives the corresponding connected device according to a preset program. The touch panel 1071 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the user's touch orientation and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 can be implemented in multiple types such as resistive, capacitive, infrared and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 can also include other input devices 1072, which may specifically include but are not limited to one or more of a physical keyboard, function keys (such as volume control keys and a switch key), a trackball, a mouse, a joystick and the like; no limitation is imposed here.
Further, the touch panel 1071 can cover the display panel 1061. After detecting a touch operation on or near it, the touch panel 1071 transmits it to the processor 110 to determine the type of the touch event, and the processor 110 then provides corresponding visual output on the display panel 1061 according to the type of the touch event. Although in Fig. 1 the touch panel 1071 and the display panel 1061 realize the input and output functions of the mobile terminal as two independent components, in certain embodiments the touch panel 1071 and the display panel 1061 can be integrated to realize the input and output functions of the mobile terminal; no specific limitation is imposed here.
The interface unit 108 serves as an interface through which at least one external device can be connected to the mobile terminal 100. For example, the external device may include a wired or wireless headphone port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port and the like. The interface unit 108 can be used to receive input from an external device (for example, data information or electric power) and transfer the received input to one or more elements within the mobile terminal 100, or can be used to transmit data between the mobile terminal 100 and the external device.
The memory 109 can be used to store software programs and various data. The memory 109 may mainly include a program storage area and a data storage area, wherein the program storage area can store the operating system, application programs required for at least one function (such as a sound playback function or an image playback function) and the like; the data storage area can store data created according to the use of the phone (such as audio data and the phone book) and the like. In addition, the memory 109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device or another non-volatile solid-state storage device.
The processor 110 is the control center of the mobile terminal. It connects all parts of the entire mobile terminal through various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing the software programs and/or modules stored in the memory 109 and calling the data stored in the memory 109, so as to monitor the mobile terminal as a whole. The processor 110 may include one or more processing units; preferably, the processor 110 can integrate an application processor and a modem processor, wherein the application processor mainly handles the operating system, the user interface, application programs and so on, and the modem processor mainly handles wireless communication. It is understood that the above modem processor may also not be integrated into the processor 110.
The mobile terminal 100 can also include a power supply 111 (such as a battery) that supplies power to all components. Preferably, the power supply 111 can be logically connected to the processor 110 through a power management system, so that functions such as charging management, discharging management and power consumption management are realized through the power management system.
Although not shown in Fig. 1, the mobile terminal 100 can also include a Bluetooth module and the like, which are not described in detail here.
To facilitate understanding of the embodiments of the present invention, the communication network system on which the mobile terminal of the present invention is based is described below.
Referring to Fig. 2, Fig. 2 is an architecture diagram of a communication network system provided by an embodiment of the present invention. The communication network system is an LTE system of the universal mobile communication technology, and includes, communicatively connected in sequence, a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203 and an operator's IP service 204.
Specifically, the UE 201 can be the above-described terminal 100, which is not described again here.
The E-UTRAN 202 includes an eNodeB 2021 and other eNodeBs 2022 and the like. The eNodeB 2021 can be connected to the other eNodeBs 2022 through a backhaul (for example an X2 interface), the eNodeB 2021 is connected to the EPC 203, and the eNodeB 2021 can provide the UE 201 with access to the EPC 203.
The EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036 and so on. The MME 2031 is a control node that handles signaling between the UE 201 and the EPC 203, and provides bearer and connection management. The HSS 2032 is used to provide registers to manage functions such as a home location register (not shown), and preserves user-specific information such as service features and data rates. All user data can be sent through the SGW 2034; the PGW 2035 can provide IP address allocation and other functions for the UE 201; and the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, which selects and provides available policy and charging control decisions for the policy and charging enforcement function unit (not shown).
The IP service 204 may include the Internet, an intranet, an IMS (IP Multimedia Subsystem) or other IP services.
Although the above description takes an LTE system as an example, those skilled in the art should understand that the present invention is applicable not only to LTE systems but also to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA and future new network systems, without limitation here.
Based on the above mobile terminal hardware structure and communication network system, the embodiments of the method of the present invention are proposed.
Embodiment 1
Fig. 3 shows the flowchart of the playback effect display method described in this embodiment. The playback effect display method proposed by this embodiment includes:
S101: obtaining the screen display content.
Specifically, the playback effect display method proposed by this embodiment can be applied to intelligent terminals whose screens are deformable, i.e. terminals with flexible screens, such as flexible-screen mobile phones, tablets and computers. The purpose of the method proposed by this embodiment is to present a video being played with a stereoscopic effect. Therefore, before the effect is displayed, the content shown on the screen needs to be obtained. If the content currently shown on the screen is video content, the subsequent effect display can be carried out; if it is not video content, no effect-display processing is performed. In this embodiment, the content shown on the screen can be identified according to the application program currently opened by the user.
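As a minimal sketch of this check: identifying whether video is on screen from the foreground application. The app list and the query callback here are hypothetical placeholders; on a real terminal this information would come from the platform's window or activity manager, which the patent does not specify.

```python
# Sketch of identifying screen content from the foreground application.
# VIDEO_APPS and the get_foreground_app callback are hypothetical
# stand-ins for a platform query on a real terminal.
VIDEO_APPS = {"video_player", "streaming_app"}

def is_video_content(get_foreground_app):
    """Return True when the currently opened application plays video."""
    return get_foreground_app() in VIDEO_APPS

print(is_video_content(lambda: "video_player"))  # -> True
print(is_video_content(lambda: "browser"))       # -> False
```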
S102: when the screen display content is video content, obtaining video feature information.
Specifically, when the user watches a video on a smart device, the content shown on the screen necessarily includes video content. A conventional video generally contains elements such as picture, sound and subtitles, through which the user perceives the content the video presents. Therefore, the video feature information described in this embodiment can also be obtained from some of these elements; the video feature information can be one of subtitles and sound, or a combination of both.
S103: obtaining the current playback scene information according to the video feature information.
Specifically, as described above, the video feature information recorded in this embodiment can be one of subtitles and sound, or a combination of both. Subtitles usually contain keyword information that can reflect the current playback scene. For example, a line such as "Look, how beautiful the moon is tonight!" contains the keyword "moon", from which it can be inferred that the current scene is at night. As another example, a line such as "The waves are so vast!" contains the keyword "waves", from which it can be inferred that the current scene is by the sea. It can be seen that the scene of the current picture can be inferred by extracting keywords from the subtitles. Different scenes can also be obtained from combinations of keywords. In the line "Look, how beautiful the moon is tonight!", both "moon" and "beautiful" can serve as keywords. With "moon" alone, one can only learn that the current scene is at night; although there is a moon, one cannot know whether it is visible. Combined with "beautiful", it can further be concluded that the current scene is a clear night sky with the moon visible. Likewise, from "waves" in the line "The waves are so vast!" one knows the scene is a seashore, and with "vast" added one knows it is a seashore with surging waves; the current scene information is thus known in greater depth. Through keyword extraction, and through the different combinations of multiple keywords, different scene information can be obtained. Therefore, in this embodiment a playback scene library can be set up on a server or in the cloud, in which the correspondences between a large number of keywords and scene information are prestored; when keyword extraction is performed on a subtitle, the corresponding scene information is looked up. Meanwhile, during long-term recognition, deep learning can also be performed on the subtitle information and the scene information to update the playback scene library, so as to improve it.
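The keyword-to-scene lookup just described can be sketched as follows. This is an illustration under stated assumptions, not the patent's implementation: the scene library contents, the keyword lists and the "largest combination wins" rule are hypothetical stand-ins for the large prestored correspondence table the embodiment places on a server or in the cloud.

```python
# Minimal sketch of the subtitle-keyword -> playback-scene lookup.
# SCENE_LIBRARY is a hypothetical stand-in for the prestored library of
# correspondences between keyword combinations and scene information.
SCENE_LIBRARY = {
    frozenset(["moon"]): "night",
    frozenset(["moon", "beautiful"]): "clear night sky with visible moon",
    frozenset(["waves"]): "seashore",
    frozenset(["waves", "vast"]): "seashore with surging waves",
}

KNOWN_KEYWORDS = {kw for combo in SCENE_LIBRARY for kw in combo}

def extract_keywords(subtitle):
    """Keep only the words that appear in the scene library."""
    words = subtitle.lower().replace("!", "").replace(",", "").split()
    return frozenset(w for w in words if w in KNOWN_KEYWORDS)

def lookup_scene(subtitle):
    """Match the largest prestored keyword combination, so that richer
    combinations (e.g. moon + beautiful) yield more specific scenes."""
    keywords = extract_keywords(subtitle)
    best = None
    for combo, scene in SCENE_LIBRARY.items():
        if combo <= keywords and (best is None or len(combo) > len(best[0])):
            best = (combo, scene)
    return best[1] if best else None

print(lookup_scene("Look, how beautiful the moon is tonight!"))
print(lookup_scene("The waves are so vast!"))
```

Preferring the largest matching combination mirrors the text's point that "moon" alone only implies night, while "moon" plus "beautiful" implies a clear night sky with the moon visible.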
For sound, the playback scene information is obtained mainly by means of voice recognition. For example, if the sound of waves is recognized in the audio of some segment of the video, the scene can be known to be a seashore; if an echo is recognized, the scene can be known to be a spacious environment. Meanwhile, the lines spoken by the performers in the video can also be recognized by voice recognition, converted into text information, and then processed with reference to the above-described way of performing keyword extraction on subtitles to obtain the current scene. Similar to obtaining the playback scene information from subtitles, a playback scene library can also be set up on a server or in the cloud, in which the correspondences between a large number of sound signatures and scene information are prestored; when voice recognition is performed on some sound, the corresponding scene information is looked up. Likewise, during long-term recognition, deep learning can be performed on the sound information and the scene information to update and improve the playback scene library.
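The audio path can be sketched in the same spirit. Everything here is a stub under assumptions: the sound-signature library is hypothetical, real wave or echo detection would require an acoustic model, and the speech branch would feed a speech-to-text engine whose transcript is then handled exactly like a subtitle; only the library-lookup structure is illustrated.

```python
# Sketch of the sound -> scene path. SOUND_SCENE_LIBRARY is a hypothetical
# stand-in for the prestored sound-signature/scene correspondences; a real
# system would run an acoustic classifier to produce the labels.
SOUND_SCENE_LIBRARY = {
    "waves": "seashore",
    "echo": "spacious environment",
    "flame_crackle": "burning flame",
}

def classify_sound(audio_labels):
    """Given labels produced by an (assumed) acoustic classifier for a
    video segment, return the first matching prestored scene."""
    for label in audio_labels:
        if label in SOUND_SCENE_LIBRARY:
            return SOUND_SCENE_LIBRARY[label]
    return None  # fall back to the speech-to-text + keyword path

print(classify_sound(["speech", "echo"]))  # -> spacious environment
```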
In addition, this embodiment can also use sound in combination with subtitles, so that the feature information of the video content can be captured more comprehensively, providing sufficient grounds for the judgment of the scene information.
S104: controlling the deformation of the screen according to the playback scene information.
Specifically, after the playback scene information is obtained, the playback effect can be presented according to it. For example, for a flame, the screen region showing the flame can perform a ripple vibration to simulate the flickering of a burning flame. For another example, for a sports car travelling on a highway, the screen can be raised along the direction in which the car travels. These effects can all be achieved by controlling the deformation of a flexible screen. When controlling the deformation of the flexible screen, it is possible that only part of the picture needs a stereoscopic effect, while the other parts remain a normal flat display. In this case, this embodiment can first capture the picture currently displayed by the video, then recognize the picture and identify the partial picture features that need stereoscopic presentation, and control the deformation of the screen at the position of those picture features in combination with the scene. For example, as shown in Fig. 4, only the highway on which the vehicle is driving needs stereoscopic presentation in the entire picture, while the other persons and objects need no processing. By recognizing the picture as a whole, the contour features of the highway are identified, and deformation control is then applied only to that contour region. This ensures that the rest of the picture is not affected.
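The region-selective deformation just described might be modelled as a per-pixel height map that is raised only inside the recognized region. The sketch below is purely illustrative and uses a rectangle as a stand-in for the recognized highway contour.

```python
def deformation_map(width, height, region, strength):
    """Build a per-pixel deformation height map: pixels inside the
    recognized region are raised by `strength`; the rest stay flat."""
    x0, y0, x1, y1 = region  # (left, top, right, bottom), exclusive on right/bottom
    return [[strength if (x0 <= x < x1 and y0 <= y < y1) else 0.0
             for x in range(width)]
            for y in range(height)]
```

Keeping the map zero outside the region is what guarantees, as the text requires, that the rest of the picture is not affected by the deformation.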
Furthermore, since different users have different preferences regarding video display effects, some users may still prefer a normal video picture. Therefore, in order to meet the different demands of users, this embodiment can send the user, before video playback, a prompt asking whether to enable special-effect playback. If the user chooses to enable special-effect playback, the method steps proposed in this embodiment are executed. If the user chooses not to enable special-effect playback, the original video display effect is retained.
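This opt-in behaviour can be sketched as a simple gate in front of the effect pipeline. The function and parameter names below are illustrative, not from the patent.

```python
def play(frames, effects_enabled, apply_effect):
    """Run the special-effect step only when the user opted in;
    otherwise pass frames through unchanged."""
    if not effects_enabled:
        return list(frames)          # original display effect retained
    return [apply_effect(f) for f in frames]
```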
Embodiment 2
Corresponding to Embodiment 1, this embodiment proposes a playback effect display device. The playback effect display device includes a processor, and the processor is configured with processor-executable operation instructions so as to perform the following operations:
obtaining the content displayed on a screen;
when the displayed content is video content, obtaining video feature information;
obtaining current playback scene information according to the video feature information;
controlling the deformation of the screen according to the playback scene information.
Specifically, the device proposed in this embodiment can be applied to intelligent terminals with a deformable flexible screen, such as flexible-screen mobile phones, tablets and computers. The purpose of the device proposed in this embodiment is to present a stereoscopic effect for the video being played. Therefore, before the effect is displayed, the content shown on the screen needs to be obtained. If the content currently shown on the screen is video content, the subsequent effect display can be carried out; if it is not video content, no effect processing is performed. In this embodiment, the content shown on the screen can be identified according to the application program currently opened by the user.
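Identifying video content from the currently opened application, as the text suggests, could be as simple as matching the foreground app against a known set of video players. The package names below are invented for illustration.

```python
# Hypothetical set of known video-player package names.
VIDEO_APPS = {"com.example.player", "com.example.streaming"}

def is_video_content(foreground_app):
    """Treat the screen content as video when the foreground
    application is a known video player."""
    return foreground_app in VIDEO_APPS
```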
When the user watches video through a smart device, the content shown on the screen necessarily includes video content. At present, conventional video generally contains elements such as picture, sound and subtitles, and the video content to be presented can be obtained through these elements. Therefore, the video feature information described in this embodiment can also be obtained from the existing elements of the video. The video feature information can be one of subtitles or sound, or a combination of both.
As for subtitles, the subtitles will generally contain keyword information that reflects the scene currently being played. For example, a line such as "Look, the moon is so beautiful tonight!" contains the keyword "moon", from which it can be inferred that the current scene is at night. As another example, a line such as "The waves are so vast!" contains the keyword "wave", from which it can be inferred that the current scene is at the seaside. It can be seen that by extracting keywords from the subtitles, the scene of the current picture can be analysed. Different scenes can also be obtained from combinations of keywords. For example, in the line "Look, the moon is so beautiful tonight!", both "moon" and "beauty" can serve as keywords. With "moon" alone as the keyword, it can only be learned that the current scene is at night; even though there is a moon, it cannot be known whether the moon is visible. After combining it with "beauty", it can further be obtained that the current scene is a clear night sky with the moon visible. Likewise, from "wave" in the line "The waves are so vast!" it is known that the scene is at the seaside, and by adding "vast" it is known that the current scene is a seaside with surging waves, so the scene information is known in greater depth. Through keyword extraction, and through different combinations of multiple keywords, different scene information can be obtained. Therefore, in this embodiment a playback scene library can be set up on a server or in the cloud, in which the correspondence between a large number of keywords and scene information is prestored. When keyword extraction is performed on a subtitle, the corresponding scene information will be found. Meanwhile, over the course of long-term recognition, deep learning can also be applied to the subtitle information and scene information, so that the playback scene library is updated and refined.
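The keyword-combination lookup against the playback scene library could look like the following sketch, which tries the full combination first and backs off to single keywords. The library contents mirror the moon/wave examples above and are purely illustrative.

```python
# Playback scene library keyed by sorted keyword combinations (illustrative).
SCENE_LIBRARY = {
    ("moon",): "night",
    ("beauty", "moon"): "clear night sky with a visible moon",
    ("wave",): "seaside",
    ("vast", "wave"): "seaside with surging waves",
}

def lookup_scene(keywords):
    """Match the most specific keyword combination first,
    then back off to individual keywords."""
    combo = tuple(sorted(keywords))
    if combo in SCENE_LIBRARY:
        return SCENE_LIBRARY[combo]
    for word in keywords:
        if (word,) in SCENE_LIBRARY:
            return SCENE_LIBRARY[(word,)]
    return None
```

Keying on sorted tuples lets "moon" + "beauty" resolve to a more specific scene than "moon" alone, matching the text's point that keyword combinations yield deeper scene information.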
As for sound, the playback scene information is obtained mainly by way of sound recognition. For example, the sound signature of waves can be recognized in the audio of some segment of the video, from which it can be known that the scene is a seaside. As another example, the sound signature of an echo can be recognized in the audio of some segment of the video, from which it can be known that the scene is an open, empty environment. Meanwhile, the lines spoken by the performers in the video can also be recognized through speech recognition; the speech is converted into text information, and the current scene is then obtained with reference to the keyword-extraction approach applied to subtitles described above. Similar to obtaining the playback scene information from subtitles, a playback scene library can also be set up on a server or in the cloud, in which the correspondence between a large number of sound signatures and scene information is prestored. When sound recognition is performed on some sound, the corresponding scene information will be found. Meanwhile, over the course of long-term recognition, deep learning can also be applied to the sound information and scene information, so that the playback scene library is updated and refined.
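The long-term update of the playback scene library could be approximated, far more simply than the deep learning the text envisions, by promoting frequently co-observed (sound feature, scene) pairs into the library. This is a hypothetical sketch, not the patent's method.

```python
from collections import Counter

def update_scene_library(library, observations, min_count=3):
    """Promote a (sound_feature, scene) pair into the library once it
    has been observed at least `min_count` times. A crude stand-in for
    the deep-learning update described in the text."""
    counts = Counter(observations)
    for (feature, scene), n in counts.items():
        if n >= min_count and feature not in library:
            library[feature] = scene
    return library
```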
In addition, this embodiment may also combine sound with subtitles, so that the feature information of the video content is captured more comprehensively, providing a sufficient basis for judging the scene information.
After the playback scene information is obtained, the playback effect can be presented according to it. For example, for a flame, the screen region showing the flame can perform a ripple vibration to simulate the flickering of a burning flame. For another example, for a sports car travelling on a highway, the screen can be raised along the direction in which the car travels. These effects can all be achieved by controlling the deformation of a flexible screen. When controlling the deformation of the flexible screen, it is possible that only part of the picture needs a stereoscopic effect, while the other parts remain a normal flat display. In this case, this embodiment can first capture the picture currently displayed by the video, then recognize the picture and identify the partial picture features that need stereoscopic presentation, and control the deformation of the screen at the position of those picture features in combination with the scene. For example, as shown in Fig. 4, only the highway on which the vehicle is driving needs stereoscopic presentation in the entire picture, while the other persons and objects need no processing. By recognizing the picture as a whole, the contour features of the highway are identified, and deformation control is then applied only to that contour region. This ensures that the rest of the picture is not affected.
Furthermore, since different users have different preferences regarding video display effects, some users may still prefer a normal video picture. Therefore, in order to meet the different demands of users, this embodiment can send the user, before video playback, a prompt asking whether to enable special-effect playback. If the user chooses to enable special-effect playback, the video display effect presentation process proposed in this embodiment is executed. If the user chooses not to enable special-effect playback, the original video playback effect is retained.
Embodiment 3
Fig. 5 is a schematic structural diagram of a mobile terminal provided in this embodiment. The mobile terminal 500 shown in Fig. 5 includes: at least one processor 501, a memory 502, at least one network interface 504 and another user interface 503. The various components in the mobile terminal 500 are coupled through a bus system 505. It can be understood that the bus system 505 is used to realize the connection and communication between these components. In addition to a data bus, the bus system 505 further includes a power bus, a control bus and a status signal bus. However, for the sake of clear explanation, the various buses are all labelled as the bus system 505 in Fig. 5.
The user interface 503 may include a display, a keyboard or a pointing device (for example, a mouse, a trackball, a touch pad or a touch screen, etc.).
It can be understood that the memory 502 in this embodiment can be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory can be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM) or a flash memory. The volatile memory can be a random access memory (Random Access Memory, RAM), which is used as an external cache. By way of exemplary but not restrictive illustration, many forms of RAM are available, such as static random access memory (Static RAM, SRAM), dynamic random access memory (Dynamic RAM, DRAM), synchronous dynamic random access memory (Synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), synchronous link dynamic random access memory (Synclink DRAM, SLDRAM) and direct Rambus random access memory (Direct Rambus RAM, DRRAM). The memory 502 described herein is intended to include, but is not limited to, these and any other suitable types of memory.
In some embodiments, the memory 502 stores the following elements, executable units or data structures, or a subset or superset of them: an operating system 5021 and application programs 5022.
The operating system 5021 includes various system programs, such as a framework layer, a core library layer, a driver layer, etc., for realizing various basic services and processing hardware-based tasks. The application programs 5022 include various application programs, such as a media player (MediaPlayer), a browser (Browser), etc., for realizing various application services. A program implementing the method of this embodiment may be included in the application programs 5022.
In this embodiment, by calling a program or instruction stored in the memory 502, specifically a program or instruction stored in the application programs 5022, the processor 501 is used to execute the method steps provided by the method embodiments, for example including:
obtaining the content displayed on a screen;
when the displayed content is video content, obtaining video feature information;
obtaining current playback scene information according to the video feature information;
controlling the deformation of the screen according to the playback scene information.
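The four operations executed by the processor can be wired together as one small pipeline, with each stage supplied by the caller. This is an illustrative outline only; none of the names below come from the patent.

```python
def playback_effect_pipeline(get_content, is_video, extract_features,
                             scene_for, deform_screen):
    """Chain the four claimed steps: fetch screen content, check for
    video, derive feature information, look up the playback scene,
    and deform the screen accordingly."""
    content = get_content()
    if not is_video(content):
        return None                  # non-video content: no effect processing
    scene = scene_for(extract_features(content))
    deform_screen(scene)
    return scene
```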
By obtaining the video content shown on the screen and then obtaining video feature information from the video content, where each piece of video feature information corresponds to a playback scene, the deformation of the screen is controlled according to the corresponding playback scene, so that a stereoscopic effect under the specific scene environment is shown to the user, providing a better sensory experience for the user watching the video.
The method disclosed in the above embodiment can be applied to, or realized by, the processor 501. The processor 501 may be an integrated circuit chip with signal processing capability. During implementation, each step of the above method can be completed by an integrated logic circuit of hardware in the processor 501 or by instructions in the form of software. The above processor 501 can be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute each method, step and logical block diagram disclosed in this embodiment. The general-purpose processor can be a microprocessor, or the processor can also be any conventional processor, etc. The steps of the method disclosed in this embodiment can be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software units in a decoding processor. The software unit can be located in a storage medium mature in this field, such as a random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, or a register. The storage medium is located in the memory 502, and the processor 501 reads the information in the memory 502 and completes the steps of the above method in combination with its hardware.
It can be understood that the embodiments described herein can be realized with hardware, software, firmware, middleware, microcode or a combination thereof. For hardware implementation, the processing unit may be implemented in one or more application-specific integrated circuits (Application Specific Integrated Circuits, ASIC), digital signal processors (Digital Signal Processing, DSP), digital signal processing devices (DSP Device, DSPD), programmable logic devices (Programmable Logic Device, PLD), field-programmable gate arrays (Field-Programmable Gate Array, FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for executing the functions described herein, or a combination thereof.
For software implementation, the techniques described herein can be realized by units executing the functions described herein. The software code can be stored in a memory and executed by a processor. The memory can be implemented in the processor or outside the processor.
Embodiment 4
An embodiment of the present invention provides a computer-readable storage medium. The computer-readable storage medium stores computer instructions, and the computer instructions cause a computer to execute the method provided by the method embodiments, for example:
obtaining the content displayed on a screen;
when the displayed content is video content, obtaining video feature information;
obtaining current playback scene information according to the video feature information;
controlling the deformation of the screen according to the playback scene information.
By obtaining the video content shown on the screen and then obtaining video feature information from the video content, where each piece of video feature information corresponds to a playback scene, the deformation of the screen is controlled according to the corresponding playback scene, so that a stereoscopic effect under the specific scene environment is shown to the user, providing a better sensory experience for the user watching the video.
Specifically, the purpose of this embodiment is to present a stereoscopic effect for the video being played. Therefore, before the effect is displayed, the content shown on the screen needs to be obtained. If the content currently shown on the screen is video content, the subsequent effect display can be carried out; if it is not video content, no effect processing is performed. In this embodiment, the content shown on the screen can be identified according to the application program currently opened by the user.
When the user watches video through a smart device, the content shown on the screen necessarily includes video content. At present, conventional video generally contains elements such as picture, sound and subtitles, and the video content to be presented can be obtained through these elements. Therefore, the video feature information described in this embodiment can also be obtained from the existing elements of the video. The video feature information can be one of subtitles or sound, or a combination of both.
As for subtitles, the subtitles will generally contain keyword information that reflects the scene currently being played. For example, a line such as "Look, the moon is so beautiful tonight!" contains the keyword "moon", from which it can be inferred that the current scene is at night. As another example, a line such as "The waves are so vast!" contains the keyword "wave", from which it can be inferred that the current scene is at the seaside. It can be seen that by extracting keywords from the subtitles, the scene of the current picture can be analysed. Different scenes can also be obtained from combinations of keywords. For example, in the line "Look, the moon is so beautiful tonight!", both "moon" and "beauty" can serve as keywords. With "moon" alone as the keyword, it can only be learned that the current scene is at night; even though there is a moon, it cannot be known whether the moon is visible. After combining it with "beauty", it can further be obtained that the current scene is a clear night sky with the moon visible. Likewise, from "wave" in the line "The waves are so vast!" it is known that the scene is at the seaside, and by adding "vast" it is known that the current scene is a seaside with surging waves, so the scene information is known in greater depth. Through keyword extraction, and through different combinations of multiple keywords, different scene information can be obtained. Therefore, in this embodiment a playback scene library can be set up on a server or in the cloud, in which the correspondence between a large number of keywords and scene information is prestored. When keyword extraction is performed on a subtitle, the corresponding scene information will be found. Meanwhile, over the course of long-term recognition, deep learning can also be applied to the subtitle information and scene information, so that the playback scene library is updated and refined.
As for sound, the playback scene information is obtained mainly by way of sound recognition. For example, the sound signature of waves can be recognized in the audio of some segment of the video, from which it can be known that the scene is a seaside. As another example, the sound signature of an echo can be recognized in the audio of some segment of the video, from which it can be known that the scene is an open, empty environment. Meanwhile, the lines spoken by the performers in the video can also be recognized through speech recognition; the speech is converted into text information, and the current scene is then obtained with reference to the keyword-extraction approach applied to subtitles described above. Similar to obtaining the playback scene information from subtitles, a playback scene library can also be set up on a server or in the cloud, in which the correspondence between a large number of sound signatures and scene information is prestored. When sound recognition is performed on some sound, the corresponding scene information will be found. Meanwhile, over the course of long-term recognition, deep learning can also be applied to the sound information and scene information, so that the playback scene library is updated and refined.
In addition, this embodiment may also combine sound with subtitles, so that the feature information of the video content is captured more comprehensively, providing a sufficient basis for judging the scene information.
After the playback scene information is obtained, the playback effect can be presented according to it. For example, for a flame, the screen region showing the flame can perform a ripple vibration to simulate the flickering of a burning flame. For another example, for a sports car travelling on a highway, the screen can be raised along the direction in which the car travels. These effects can all be achieved by controlling the deformation of a flexible screen. When controlling the deformation of the flexible screen, it is possible that only part of the picture needs a stereoscopic effect, while the other parts remain a normal flat display. In this case, this embodiment can first capture the picture currently displayed by the video, then recognize the picture and identify the partial picture features that need stereoscopic presentation, and control the deformation of the screen at the position of those picture features in combination with the scene. For example, as shown in Fig. 4, only the highway on which the vehicle is driving needs stereoscopic presentation in the entire picture, while the other persons and objects need no processing. By recognizing the picture as a whole, the contour features of the highway are identified, and deformation control is then applied only to that contour region. This ensures that the rest of the picture is not affected.
Furthermore, since different users have different preferences regarding video display effects, some users may still prefer a normal video picture. Therefore, in order to meet the different demands of users, this embodiment can send the user, before video playback, a prompt asking whether to enable special-effect playback. If the user chooses to enable special-effect playback, the video display effect presentation process proposed in this embodiment is executed. If the user chooses not to enable special-effect playback, the original video playback effect is retained.
Those of ordinary skill in the art may be aware that the units and algorithm steps described in conjunction with the embodiments disclosed herein can be realized with electronic hardware, or with a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the specific application and design constraints of the technical solution. A professional technician can use different methods to achieve the described functions for each specific application, but such implementation should not be considered as exceeding the scope of the present invention.
It is apparent to those skilled in the art that, for convenience and simplicity of description, for the specific working process of the system, device and units described above, reference can be made to the corresponding processes in the foregoing method embodiments, and details are not described herein again.
In the embodiments provided in this application, it should be understood that the disclosed device and method can be realized in other ways. For example, the device embodiments described above are merely exemplary. For example, the division of the units is only a division by logical function; there may be other division manners in actual implementation. For example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed can be indirect coupling or communication connection through some interfaces, devices or units, and can be electrical, mechanical or in other forms.
The units described as separate members may or may not be physically separated, and the components shown as units may or may not be physical units; they can be located in one place, or may be distributed over multiple network units. Some or all of the units can be selected according to actual needs to realize the purpose of the solution of this embodiment.
In addition, the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may physically exist alone, or two or more units may be integrated into one unit.
If the function is realized in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the embodiments of the present invention, in essence, or the part that contributes to the existing technology, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes some instructions used to cause a computer device (which can be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the method of each embodiment of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a mobile hard disk, a ROM, a RAM, a magnetic disk or an optical disc.
It should be noted that, in this document, the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device that includes a series of elements not only includes those elements, but also includes other elements that are not explicitly listed, or further includes elements inherent to such a process, method, article or device. In the absence of more restrictions, an element limited by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes the element.
The serial numbers of the above embodiments of the invention are only for description and do not represent the advantages or disadvantages of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method of the above embodiments can be realized by means of software plus a necessary general hardware platform, and naturally can also be realized by hardware, but in many cases the former is the better embodiment. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disc) and includes some instructions used to cause a terminal (which can be a mobile phone, a computer, a server, an air conditioner or a network device, etc.) to execute the method described in each embodiment of the present invention.
The embodiments of the present invention have been described above with the accompanying drawings, but the invention is not limited to the above specific embodiments. The above embodiments are only schematic rather than restrictive. Under the inspiration of the present invention, those skilled in the art can also make many forms without departing from the scope protected by the purposes and claims of the present invention, all of which belong to the protection of the present invention.
Claims (10)
1. A playback effect display method, characterized in that the method comprises:
obtaining the content displayed on a screen;
when the displayed content is video content, obtaining video feature information;
obtaining current playback scene information according to the video feature information;
controlling the deformation of the screen according to the playback scene information.
2. The playback effect display method according to claim 1, characterized in that the specific process of obtaining video feature information when the displayed content is video content, and obtaining current playback scene information according to the video feature information, is:
when the displayed content is video content, obtaining the subtitle information of the video content;
performing keyword extraction on the subtitle information to obtain keyword information;
obtaining playback scene information corresponding to the keyword information according to the keyword information.
3. The playback effect display method according to claim 2, characterized in that before obtaining the current playback scene information according to the video feature information, the method further comprises:
pre-establishing a playback scene library, the playback scene library containing the correspondence between keyword information and playback scene information.
4. The playback effect display method according to claim 1, characterized in that the specific process of obtaining video feature information when the displayed content is video content, and obtaining current playback scene information according to the video feature information, is:
when the displayed content is video content, obtaining the voice information of the video content;
performing speech recognition on the voice information to obtain voice feature information;
obtaining playback scene information corresponding to the voice feature information according to the voice feature information.
5. The playback effect display method according to claim 4, characterized in that before obtaining the current playback scene information according to the video feature information, the method further comprises:
pre-establishing a playback scene library, the playback scene library containing the correspondence between voice feature information and playback scene information.
6. The playback effect display method according to claim 3 or 5, characterized in that the method further comprises:
performing deep learning on the video feature information and the playback scene information to obtain a learning result;
updating the playback scene library according to the learning result.
7. The playback effect display method according to claim 1, characterized in that the specific process of controlling the deformation of the screen according to the playback scene information is:
obtaining the picture currently displayed by the video content;
recognizing the picture to obtain picture feature information;
performing deformation control, according to the playback scene information, on the position of the screen where the picture feature information is located.
8. The playback effect display method according to claim 1, characterized in that before obtaining the content displayed on the screen, the method further comprises:
sending the user a prompt asking whether to enable special-effect playback;
when the user chooses to enable special-effect playback, obtaining the content displayed on the screen.
9. A playback effect display device, characterized in that the playback effect display device comprises a processor, and the processor is configured with processor-executable operation instructions so as to execute the steps of the method according to any one of claims 1 to 8.
10. A computer-readable storage medium, wherein the computer-readable storage medium stores computer instructions, and the computer instructions cause a computer to perform the steps of the method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811642138.6A CN109873902A (en) | 2018-12-29 | 2018-12-29 | Result of broadcast methods of exhibiting, device and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109873902A (en) | 2019-06-11 |
Family
ID=66917347
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811642138.6A Pending CN109873902A (en) | 2018-12-29 | 2018-12-29 | Result of broadcast methods of exhibiting, device and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109873902A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112637662A (en) * | 2020-12-28 | 2021-04-09 | 北京小米移动软件有限公司 | Method, device and storage medium for generating vibration sense of media image |
CN115690641A (en) * | 2022-05-25 | 2023-02-03 | 中仪英斯泰克进出口有限公司 | Screen control method and system based on image display |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012159853A (en) * | 2012-04-03 | 2012-08-23 | Seiko Epson Corp | Screen |
CN103365356A (en) * | 2012-03-21 | 2013-10-23 | 三星电子株式会社 | Method and apparatus for displaying on electronic device |
CN106527728A (en) * | 2016-11-22 | 2017-03-22 | 青岛海信移动通信技术股份有限公司 | Mobile terminal and display control method for mobile terminal |
CN106662801A (en) * | 2014-07-15 | 2017-05-10 | Cj Cgv 株式会社 | Variable screen system |
CN106774817A (en) * | 2015-11-20 | 2017-05-31 | 意美森公司 | The flexible apparatus of tactile activation |
CN107820027A (en) * | 2017-11-02 | 2018-03-20 | 北京奇虎科技有限公司 | Video personage dresss up method, apparatus, computing device and computer-readable storage medium |
CN108228047A (en) * | 2017-11-29 | 2018-06-29 | 努比亚技术有限公司 | A kind of video playing control method, terminal and computer readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107229402A (en) | Dynamic screenshotss method, device and the readable storage medium storing program for executing of terminal | |
CN108076292A (en) | Image pickup method, mobile terminal and storage medium | |
CN108279823A (en) | A kind of flexible screen display methods, terminal and computer readable storage medium | |
CN108259781A (en) | image synthesizing method, terminal and computer readable storage medium | |
CN107493497A (en) | A kind of video broadcasting method, terminal and computer-readable recording medium | |
CN109743504A (en) | A kind of auxiliary photo-taking method, mobile terminal and storage medium | |
CN108200269A (en) | Display screen control management method, terminal and computer readable storage medium | |
CN108259988A (en) | A kind of video playing control method, terminal and computer readable storage medium | |
CN107463255A (en) | A kind of video broadcasting method, terminal and computer-readable recording medium | |
CN108055463A (en) | Image processing method, terminal and storage medium | |
CN108900780A (en) | A kind of screen light compensation method, mobile terminal and storage medium | |
CN108418948A (en) | A kind of based reminding method, mobile terminal and computer storage media | |
CN108197554A (en) | A kind of camera starts method, mobile terminal and computer readable storage medium | |
CN108241752A (en) | Photo display methods, mobile terminal and computer readable storage medium | |
CN109660973A (en) | Bluetooth control method, mobile terminal and storage medium | |
CN108197206A (en) | Expression packet generation method, mobile terminal and computer readable storage medium | |
CN108282554A (en) | A kind of CCD camera assembly and smart mobile phone | |
CN109672822A (en) | A kind of method for processing video frequency of mobile terminal, mobile terminal and storage medium | |
CN108762631A (en) | A kind of method for controlling mobile terminal, mobile terminal and computer readable storage medium | |
CN108200332A (en) | A kind of pattern splicing method, mobile terminal and computer readable storage medium | |
CN108282578A (en) | Shoot based reminding method, mobile terminal and computer readable storage medium | |
CN109889303A (en) | Video play mode switching method, device and computer readable storage medium | |
CN109729267A (en) | Filter selection method, mobile terminal and computer readable storage medium | |
CN109710159A (en) | A kind of flexible screen response method, equipment and computer readable storage medium | |
CN109276881A (en) | A kind of game control method, equipment | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20190611 |