CN108170350A - Method, terminal and computer-readable storage medium for realizing digital zoom - Google Patents
Method, terminal and computer-readable storage medium for realizing digital zoom
- Publication number
- CN108170350A CN108170350A CN201711463885.9A CN201711463885A CN108170350A CN 108170350 A CN108170350 A CN 108170350A CN 201711463885 A CN201711463885 A CN 201711463885A CN 108170350 A CN108170350 A CN 108170350A
- Authority
- CN
- China
- Prior art keywords
- touch point
- gesture
- user
- opengl
- scaling
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04847—Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/048—Indexing scheme relating to G06F3/048
- G06F2203/04806—Zoom, i.e. interaction techniques or interactors for controlling the zooming operation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/048—Indexing scheme relating to G06F3/048
- G06F2203/04808—Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously; e.g. using several fingers or a combination of fingers and pen
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Studio Devices (AREA)
Abstract
The invention discloses a method, terminal and computer-readable storage medium for realizing digital zoom, relating to the technical field of image processing. The method for realizing digital zoom includes: detecting a user zoom gesture; determining a zoom center and a zoom factor according to the zoom gesture; determining the vertex coordinates of the camera preview region in the Open Graphics Library (OpenGL) according to the zoom center and zoom factor; and calling OpenGL to draw the zoomed picture according to the vertex coordinates. The invention introduces GPU processing: the vertex coordinates of the camera preview region in OpenGL are determined from the zoom center and zoom factor, and OpenGL is then called to draw the zoomed picture from those vertex coordinates. During digital zoom, the invention avoids the complex, time-consuming process caused by excessive interaction with the HAL layer, enables a smoother and more varied digital zoom function, reduces the workload of the HAL layer, and improves the user experience.
Description
Technical field
The present invention relates to the field of image processing, and more particularly to a method, terminal and computer-readable storage medium for realizing digital zoom.
Background technology
At present, with the continuous development of communication technology, intelligent terminal products, especially smartphones, have become an indispensable part of people's daily lives. Among these electronic products, the camera function has gradually developed into one of their most basic configurations.
To make it convenient for users to shoot pictures with different depths of field, designers can rely on the camera's digital zoom function to realize zooming, i.e., the zoom function. Taking a mobile phone camera as an example, on the camera preview interface the user scales the preview picture with two fingers, and the preview picture is enlarged, as if the scene itself were magnified.
The realization principle of digital zoom is that the processor chip of the mobile terminal (such as a mobile phone) increases each pixel area of the image; that is, the processor judges the colors surrounding the existing pixels and, according to those surrounding colors, inserts additional pixels computed by a particular algorithm, so as to achieve the purpose of enlarging the image.
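The interpolation step described above can be sketched concretely. The patent does not name a particular algorithm, so the following Python sketch uses plain bilinear interpolation purely as an illustration of inserting pixels computed from the surrounding colors:

```python
def bilinear_sample(img, x, y):
    """Sample a grayscale image (list of rows) at fractional coordinates
    by blending the four surrounding pixels."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def zoom2x(img):
    """Enlarge a grayscale image 2x by inserting interpolated pixels."""
    h, w = len(img), len(img[0])
    return [[bilinear_sample(img, x / 2, y / 2)
             for x in range(2 * w)] for y in range(2 * h)]
```

Because every output pixel is a blend of its neighbors rather than new detail, digital zoom of this kind trades resolution for magnification.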
Traditional zoom realizations rely on the support of the hardware abstraction layer (Hardware Abstraction Layer, HAL). After a terminal detects the pinch gesture of the user's two fingers, it converts the gesture into the desired magnification coefficient, which is then converted into a Rect region (on the camera API1 interface this region is typically centered on the preview center and adjusted by the magnification coefficient, usually in discrete rather than stepless increments; on the API2 interface it is adjusted through a custom region). This is set down to the bottom layer, which crops the original preview data according to the coefficient or region, enlarges it, and delivers it to the upper layer for preview display. This process requires interaction with the bottom layer, is time-consuming, and is inflexible.
Summary of the invention
A primary object of the present invention is to provide a method, terminal and computer-readable storage medium for realizing digital zoom, aiming to solve the technical problem that the realization of the camera zoom function is complex and time-consuming because it requires repeated interaction with the HAL layer.
To achieve the above object, one aspect of the present invention provides a method for realizing digital zoom, the method comprising:
detecting a user zoom gesture;
determining a zoom center and a zoom factor according to the zoom gesture;
determining the vertex coordinates of the camera preview region in the Open Graphics Library (OpenGL) according to the zoom center and zoom factor;
calling OpenGL to draw the zoomed picture according to the vertex coordinates.
Further, detecting the user zoom gesture includes:
detecting a first touch point and a second touch point, wherein the first touch point and the second touch point exist simultaneously;
judging how the distance between the first touch point and the second touch point changes within a preset [T1, T2] period;
determining the user zoom gesture according to the distance change.
Further, the distance change includes: the distance gradually increasing, or the distance gradually decreasing;
determining the user zoom gesture according to the distance change includes:
when the distance gradually increases, determining that the user zoom gesture is zoom-in;
when the distance gradually decreases, determining that the user zoom gesture is zoom-out.
Further, determining the zoom center and zoom factor according to the zoom gesture includes:
when the user zoom gesture is zoom-in, determining the midpoint of the line connecting the first touch point and the second touch point at time T1 as the zoom center, and taking the ratio of the segment length between the first and second touch points at time T1 to the segment length between the first and second touch points at time T2 as the zoom factor;
when the user zoom gesture is zoom-out, determining the midpoint of the line connecting the first touch point and the second touch point at time T2 as the zoom center, and taking the ratio of the segment length between the first and second touch points at time T1 to the segment length between the first and second touch points at time T2 as the zoom factor.
Further, determining the vertex coordinates of the camera preview region in the Open Graphics Library (OpenGL) according to the zoom center and zoom factor includes:
mapping the camera preview region and the zoom factor onto OpenGL according to a preset mapping relationship based on the zoom center, to obtain the vertex coordinates of the camera preview region in OpenGL.
Further, calling OpenGL to draw the zoomed picture according to the vertex coordinates includes:
when the user zoom gesture is zoom-in, calling OpenGL to enlarge the image in the region enclosed by the vertex coordinates in OpenGL and draw it as the image of the camera preview region;
when the user zoom gesture is zoom-out, calling OpenGL to shrink the image in the region enclosed by the vertex coordinates in OpenGL and draw it as the image of the camera preview region.
Further, before detecting the user zoom gesture, the method further includes:
detecting a user slide operation;
determining the camera preview region according to the user slide operation.
Further, detecting the user slide operation includes:
detecting first slide information on a first frame of the terminal;
detecting second slide information on a second frame of the terminal;
determining the camera preview region according to the user slide operation includes:
determining the camera preview region according to the first slide information and the second slide information;
wherein the first slide information includes a first slide start coordinate and a first slide end coordinate, and the second slide information includes a second slide start coordinate and a second slide end coordinate;
the camera preview region is the region formed by connecting the first slide start coordinate, the first slide end coordinate, the second slide start coordinate and the second slide end coordinate in a predetermined manner;
the predetermined manner includes at least one of a straight line, an arc and a polyline.
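For the straight-line case, one plausible reading is that the four coordinates bound an axis-aligned rectangle. The Python sketch below makes that simplifying assumption; the function name and the bounding-box treatment are illustrative, not taken from the patent:

```python
def preview_region(first_start, first_end, second_start, second_end):
    """Form a rectangular camera preview region by connecting the start/end
    coordinates of the two frame slides with straight lines.
    Returns (left, top, right, bottom) in screen pixels."""
    points = (first_start, first_end, second_start, second_end)
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))
```

For example, a slide from (0, 100) to (0, 400) on the left frame and from (720, 120) to (720, 380) on the right frame would select the band of the screen between those slides.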
Another aspect of the present invention provides a terminal for realizing digital zoom, comprising: a memory, a processor, and a program for realizing digital zoom that is stored on the memory and executable on the processor, wherein the program, when executed by the processor, implements the steps of the method for realizing digital zoom described in any of the above.
Another aspect of the present invention provides a computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps of any of the above methods for realizing digital zoom.
The method, terminal and computer-readable storage medium for realizing digital zoom provided by the invention introduce GPU processing: the vertex coordinates of the camera preview region in OpenGL are determined according to the zoom center and zoom factor, and OpenGL is then called to draw the zoomed picture according to the vertex coordinates. During digital zoom, the invention avoids the complex, time-consuming process caused by excessive interaction with the HAL layer, enables a smoother and more varied digital zoom function, reduces the workload of the HAL layer, and improves the user experience.
Description of the drawings
Fig. 1 is a hardware structure diagram of a mobile terminal implementing the embodiments of the present invention;
Fig. 2 is a schematic diagram of a wireless communication system of the mobile terminal shown in Fig. 1;
Fig. 3 is a flowchart of a method for realizing digital zoom provided by an embodiment of the present invention;
Fig. 4 is a flowchart of another method for realizing digital zoom provided by an embodiment of the present invention;
Figs. 5a-5d are schematic diagrams of selecting a camera preview region provided by an embodiment of the present invention;
Fig. 6 is another schematic diagram of selecting a camera preview region provided by an embodiment of the present invention;
Fig. 7 is a structural diagram of a terminal for realizing digital zoom provided by an embodiment of the present invention;
The objects, functions and advantages of the present invention will be further described below with reference to the accompanying drawings and embodiments.
Detailed description of the embodiments
It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
In the subsequent description, suffixes such as "module", "component" or "unit" used to denote elements serve only to facilitate the explanation of the present invention and have no specific meaning in themselves. Therefore, "module", "component" and "unit" can be used interchangeably.
Terminals can be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, tablet computers, laptops, palmtop computers, personal digital assistants (Personal Digital Assistant, PDA), portable media players (Portable Media Player, PMP), navigation terminals, wearable devices, smart bracelets and pedometers, as well as fixed terminals such as digital TVs and desktop computers.
In the subsequent description, a mobile terminal is taken as an example. Those skilled in the art will appreciate that, except for elements specifically intended for mobile purposes, constructions according to the embodiments of the present invention can also be applied to fixed-type terminals.
Referring to Fig. 1, a hardware structure diagram of a mobile terminal implementing the embodiments of the present invention, the mobile terminal 100 may include components such as an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110 and a power supply 111. Those skilled in the art will understand that the mobile terminal structure shown in Fig. 1 does not limit the mobile terminal; a mobile terminal may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
The components of the mobile terminal are introduced in detail below with reference to Fig. 1:
The radio frequency unit 101 can be used for sending and receiving signals during messaging or a call; specifically, it receives downlink information from the base station and passes it to the processor 110 for handling, and sends uplink data to the base station. In general, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer and the like. In addition, the radio frequency unit 101 can also communicate with the network and other devices through wireless communication. The above wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution) and TDD-LTE (Time Division Duplexing-Long Term Evolution).
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help users send and receive e-mail, browse web pages, access streaming media and so on, providing users with wireless broadband Internet access. Although Fig. 1 shows the WiFi module 102, it is understood that it is not an essential part of the mobile terminal and can be omitted as needed without changing the essence of the invention.
The audio output unit 103 can, when the mobile terminal 100 is in a call-signal reception mode, call mode, recording mode, speech recognition mode, broadcast reception mode or the like, convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output it as sound. Moreover, the audio output unit 103 can also provide audio output related to a specific function performed by the mobile terminal 100 (for example, a call-signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a loudspeaker, a buzzer and the like.
The A/V input unit 104 is used to receive audio or video signals. It may include a graphics processing unit (Graphics Processing Unit, GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video acquisition mode or an image capture mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or other storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sound (audio data) in an operational mode such as a telephone call mode, recording mode or speech recognition mode, and can process such sound into audio data. In the telephone call mode, the processed audio (voice) data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101. The microphone 1042 can implement various types of noise cancellation (or suppression) algorithms to eliminate (or suppress) noise or interference generated while sending and receiving audio signals.
The mobile terminal 100 further includes at least one sensor 105, such as an optical sensor, a motion sensor and other sensors. Specifically, the optical sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally three axes) and can detect the magnitude and direction of gravity when stationary; it can be used for applications that identify the phone's posture (such as horizontal/vertical screen switching, related games, magnetometer pose calibration) and vibration-recognition-related functions (such as a pedometer or tap detection). The phone can also be configured with other sensors such as a fingerprint sensor, pressure sensor, iris sensor, molecular sensor, gyroscope, barometer, hygrometer, thermometer and infrared sensor, which are not described in detail here.
The display unit 106 is used to display information input by the user or provided to the user. The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED) display or the like.
The user input unit 107 can be used to receive input numeric or character information and to generate key signal inputs related to the user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also called a touch screen, collects the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 1071 using a finger, stylus or any other suitable object or accessory) and drives the corresponding connected device according to a preset program. The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch orientation and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 can be realized in resistive, capacitive, infrared, surface-acoustic-wave and other types. Besides the touch panel 1071, the user input unit 107 may also include other input devices 1072, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, a switch key, etc.), a trackball, a mouse and a joystick, without specific limitation here.
Further, the touch panel 1071 can cover the display panel 1061. After the touch panel 1071 detects a touch operation on or near it, the operation is sent to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in Fig. 1 the touch panel 1071 and the display panel 1061 realize the input and output functions of the mobile terminal as two independent components, in certain embodiments the touch panel 1071 and the display panel 1061 can be integrated to realize the input and output functions of the mobile terminal, without specific limitation here.
The interface unit 108 serves as an interface through which at least one external device can be connected to the mobile terminal 100. For example, the external device can include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port and the like. The interface unit 108 can be used to receive input from an external device (for example, data information, electric power, etc.) and transfer the received input to one or more elements in the mobile terminal 100, or can be used to transmit data between the mobile terminal 100 and the external device.
The memory 109 can be used to store software programs and various data. The memory 109 may mainly include a program storage area and a data storage area, where the program storage area can store the operating system, applications needed by at least one function (such as a sound playing function, an image playing function, etc.) and the like; the data storage area can store data created according to the use of the phone (such as audio data, a phone book, etc.). In addition, the memory 109 can include high-speed random-access memory, and can also include non-volatile memory, for example at least one disk storage device, a flash memory device or another solid-state storage component.
The processor 110 is the control center of the mobile terminal. Using various interfaces and lines to connect each part of the entire mobile terminal, it performs the various functions of the mobile terminal and processes data by running or executing the software programs and/or modules stored in the memory 109 and by calling the data stored in the memory 109, thereby monitoring the mobile terminal as a whole. The processor 110 may include one or more processing units; preferably, the processor 110 can integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, applications and the like, and the modem processor mainly handles wireless communication. It is understood that the above modem processor may also not be integrated into the processor 110.
The mobile terminal 100 can also include a power supply 111 (such as a battery) that powers all the components. Preferably, the power supply 111 can be logically connected to the processor 110 through a power management system, so as to realize functions such as managing charging, discharging and power consumption through the power management system.
Although not shown in Fig. 1, the mobile terminal 100 can also include a Bluetooth module and the like, which are not described in detail here.
To facilitate understanding of the embodiments of the present invention, the communication network system on which the mobile terminal of the present invention is based is described below.
Referring to Fig. 2, an architecture diagram of a communication network system provided by an embodiment of the present invention, the communication network system is an LTE system of universal mobile communication technology. The LTE system includes, in successive communication connection, a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203 and the operator's IP services 204.
Specifically, the UE 201 can be the above-described terminal 100, which is not repeated here.
The E-UTRAN 202 includes an eNodeB 2021 and other eNodeBs 2022, etc. The eNodeB 2021 can be connected with other eNodeBs 2022 through a backhaul (such as an X2 interface); the eNodeB 2021 is connected to the EPC 203 and can provide the UE 201 with access to the EPC 203.
The EPC 203 can include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036 and so on. The MME 2031 is a control node that handles signaling between the UE 201 and the EPC 203 and provides bearer and connection management. The HSS 2032 provides registers to manage functions such as the home location register (not shown) and holds user-specific information about service features, data rates and the like. All user data can be sent through the SGW 2034; the PGW 2035 can provide IP address allocation for the UE 201 and other functions; and the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, which selects and provides available policy and charging control decisions for the policy and charging enforcement function unit (not shown).
The IP services 204 can include the Internet, an intranet, an IMS (IP Multimedia Subsystem) or other IP services.
Although the above description takes the LTE system as an example, those skilled in the art should understand that the present invention is applicable not only to the LTE system but also to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA and future new network systems, without limitation here.
Based on the above mobile terminal hardware structure and communication network system, embodiments of the method of the present invention are proposed.
Fig. 3 shows a method for realizing digital zoom provided by the first embodiment of the present invention. This embodiment is described with a terminal as the executing subject. As shown in Fig. 3, the method includes:
S301, detecting a user zoom gesture;
Specifically, this step includes:
S3011, detecting a first touch point and a second touch point;
wherein the first touch point and the second touch point exist simultaneously. Illustratively, the first touch point and the second touch point can respectively correspond to touches of the user's index finger and thumb on the terminal touch screen.
S3012, judging how the distance between the first touch point and the second touch point changes within a preset [T1, T2] period;
wherein the distance change includes: the distance gradually increasing, or the distance gradually decreasing;
S3013, determining the user zoom gesture according to the distance change.
Illustratively, when the distance gradually increases, the user zoom gesture is determined to be zoom-in; when the distance gradually decreases, the user zoom gesture is determined to be zoom-out.
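Steps S3011-S3013 can be sketched as follows; the function names and the "none" result for an unchanged distance are our own illustrative choices:

```python
import math

def touch_distance(p1, p2):
    """Euclidean distance between the first and second touch points."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def classify_zoom_gesture(t1_points, t2_points):
    """Classify the two-finger gesture by comparing the touch-point
    distances sampled at the start (T1) and end (T2) of the period."""
    d1 = touch_distance(*t1_points)
    d2 = touch_distance(*t2_points)
    if d2 > d1:
        return "zoom-in"    # fingers moving apart
    if d2 < d1:
        return "zoom-out"   # fingers pinching together
    return "none"
```

In practice the two points would come from the touch screen's multi-touch events sampled over the [T1, T2] window.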
S302, determining the zoom center and zoom factor according to the zoom gesture;
Illustratively, when the user zoom gesture is zoom-in, the midpoint of the line connecting the first touch point and the second touch point at time T1 is determined as the zoom center, and the ratio of the segment length between the first and second touch points at time T1 to the segment length between the first and second touch points at time T2 is taken as the zoom factor;
when the user zoom gesture is zoom-out, the midpoint of the line connecting the first touch point and the second touch point at time T2 is determined as the zoom center, and the ratio of the segment length between the first and second touch points at time T1 to the segment length between the first and second touch points at time T2 is taken as the zoom factor.
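This rule can be sketched directly. Note that the patent defines the zoom factor as the T1/T2 segment-length ratio in both directions, so the factor comes out below 1 for zoom-in and above 1 for zoom-out:

```python
import math

def zoom_center_and_factor(t1_p1, t1_p2, t2_p1, t2_p2):
    """Compute the zoom center and factor as described in S302:
    for zoom-in the center is the T1 midpoint; for zoom-out, the T2
    midpoint; in both cases factor = len(T1 segment) / len(T2 segment)."""
    len_t1 = math.hypot(t1_p1[0] - t1_p2[0], t1_p1[1] - t1_p2[1])
    len_t2 = math.hypot(t2_p1[0] - t2_p2[0], t2_p1[1] - t2_p2[1])
    if len_t2 > len_t1:  # distance grew: zoom-in
        center = ((t1_p1[0] + t1_p2[0]) / 2, (t1_p1[1] + t1_p2[1]) / 2)
    else:                # distance shrank: zoom-out
        center = ((t2_p1[0] + t2_p2[0]) / 2, (t2_p1[1] + t2_p2[1]) / 2)
    return center, len_t1 / len_t2
```

For instance, fingers spreading from 100 px apart at T1 to 200 px apart at T2 yield a factor of 0.5 with the center at the T1 midpoint.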
S303, determining the vertex coordinates of the camera preview region in the Open Graphics Library (OpenGL) according to the zoom center and zoom factor;
In this embodiment, the camera preview region and the zoom factor are mapped onto OpenGL according to a preset mapping relationship based on the zoom center, to obtain the vertex coordinates of the camera preview region in OpenGL.
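The patent leaves the "preset mapping relationship" unspecified. The sketch below shows one plausible form: map screen pixels to OpenGL normalized device coordinates ([-1, 1] on both axes), then scale each corner of the preview region about the zoom center, dividing by the T1/T2 factor so that zoom-in (factor < 1) spreads the vertices outward:

```python
def to_ndc(x, y, screen_w, screen_h):
    """Map a screen pixel coordinate to OpenGL normalized device
    coordinates (y flipped: screen y grows downward, NDC y grows up)."""
    return (2 * x / screen_w - 1, 1 - 2 * y / screen_h)

def preview_vertices(region, center, factor, screen_w, screen_h):
    """One possible preset mapping: convert the preview region's corners
    to NDC, then scale each vertex about the zoom center by 1/factor."""
    cx, cy = to_ndc(*center, screen_w, screen_h)
    left, top, right, bottom = region
    corners = [(left, top), (right, top), (right, bottom), (left, bottom)]
    verts = []
    for x, y in corners:
        nx, ny = to_ndc(x, y, screen_w, screen_h)
        verts.append((cx + (nx - cx) / factor, cy + (ny - cy) / factor))
    return verts
```

Vertices pushed outside [-1, 1] are simply clipped by OpenGL, which is what produces the magnified view of the central region.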
S304, calling OpenGL to draw the zoomed picture according to the vertex coordinates.
Illustratively, when the user zoom gesture is zoom-in, OpenGL is called to enlarge the image in the region enclosed by the vertex coordinates in OpenGL and draw it as the image of the camera preview region;
when the user zoom gesture is zoom-out, OpenGL is called to shrink the image in the region enclosed by the vertex coordinates in OpenGL and draw it as the image of the camera preview region.
With the method for realizing digital zoom provided by the present invention, GPU processing is introduced: the vertex coordinates of the camera preview region on OpenGL are determined according to the scaling center and zoom factor, and OpenGL is then called to draw the scaled picture according to those vertex coordinates. In realizing digital zoom, the present invention avoids the complex, time-consuming process caused by excessive interaction with the HAL layer, achieves a smoother and more diverse digital zoom function, reduces the workload of the HAL layer, and improves the user experience.
Another embodiment of the present invention further provides a method for realizing digital zoom. As shown in Figure 4, the method includes:
S401: detecting a user slide operation.
Specifically, this step includes:
S4011: detecting first slide information on a first border of the terminal.
Here, the first slide information includes a first slide start coordinate and a first slide end coordinate. In this step, the user may perform a slide operation on the first border with a finger or with an external input device such as a stylus; the corresponding first slide information is then the first slide start coordinate and the first slide end coordinate of that slide operation. Alternatively, the first slide information is the first slide segment traced by the slide operation on the first border.
In addition, the user may instead perform a selection operation at any two positions on the first border; the corresponding first slide information is then the two coordinates of the two selection operations. Alternatively, the first slide information is the first slide segment formed on the first border by those two coordinates. Here, a selection operation may be a single-click or a double-click operation.
S4012: detecting second slide information on a second border of the terminal.
Here, the second slide information includes a second slide start coordinate and a second slide end coordinate.
In this step, the user may perform a slide operation on the second border with a finger or with an external input device such as a stylus; the corresponding second slide information is then the second slide start coordinate and the second slide end coordinate of that slide operation. Alternatively, the second slide information is the second slide segment traced by the slide operation on the second border.
In addition, the user may instead perform a selection operation at any two positions on the second border; the corresponding second slide information is then the two coordinates of the two selection operations. Alternatively, the second slide information is the second slide segment formed on the second border by those two coordinates. Here, a selection operation may be a single-click or a double-click operation.
S402: determining a camera preview region according to the user slide operation.
The camera preview region is the region in which the locally scaled image of the original image is displayed after zooming. The camera preview region may be a fixed display area on the terminal's display screen, or a display area selected by the user through a slide operation on the screen. For example, this step determines the camera preview region according to the first slide information and the second slide information.
Specifically, the camera preview region is the region formed by connecting the first slide start coordinate, the first slide end coordinate, the second slide start coordinate, and the second slide end coordinate in a predetermined manner, where the predetermined manner includes at least one of a straight line, an arc, and a polyline.
In one embodiment, as shown in Figure 5a, the terminal 100 is equipped with a display screen 3 comprising a first border 31 and a second border 32. The first slide information includes a first slide start coordinate A and a first slide end coordinate B; the second slide information includes a second slide start coordinate C and a second slide end coordinate D.
The camera preview region is the region formed by connecting the first slide start coordinate A, the first slide end coordinate B, the second slide start coordinate C, and the second slide end coordinate D in a predetermined manner.
Here, the predetermined manner includes at least one of a straight line, an arc, a polyline, a wavy line, and the like; a combination of two or more of these; or an irregular line customized by the user.
For example, after the user performs slide operations on the first border 31 and the second border 32 respectively, the display interface may show a line-style selection dialog as shown in Figure 5b, offering three choices: single line style, combined line style, and user-defined line style.
After the user selects the combined line style, the interface is displayed as shown in Figure 5c, and the user can pick line styles in the line-style box. For example, after selecting polyline and straight line and clicking the combine option on the right, the line-style example below is formed. When the user clicks Apply, the camera preview region becomes, as shown in Figure 5d, the region formed by connecting the first slide start coordinate A, the first slide end coordinate B, the second slide start coordinate C, and the second slide end coordinate D with the combined line style of Figure 5c.
In another embodiment, as shown in Figure 5a, the first slide information includes a first slide segment AB along the first border 31, and the second slide information includes a second slide segment CD along the second border 32.
As shown in Figure 6, the camera preview region is the rectangular region formed with the first slide segment AB and the second slide segment CD as adjacent sides.
In this embodiment the first border 31 and the second border 32 are perpendicular to each other. Obviously, the first border 31 and the second border 32 may also form other angles (other than 0°), in which case the camera preview region is the parallelogram region formed with the first slide segment AB and the second slide segment CD as adjacent sides.
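For the perpendicular-border case, building the rectangular preview region can be sketched as follows (a hypothetical helper; the patent gives no code). Segment AB is assumed to run along a horizontal border and segment CD along a vertical border:

```python
def preview_rectangle(a, b, c, d):
    """Build the rectangle that has slide segment AB and slide segment CD
    as adjacent sides.

    a, b: (x, y) endpoints of the segment along the horizontal border;
    c, d: (x, y) endpoints of the segment along the vertical border.
    Returns (left, top, right, bottom) in screen coordinates.
    """
    left, right = sorted((a[0], b[0]))   # AB fixes the horizontal extent
    top, bottom = sorted((c[1], d[1]))   # CD fixes the vertical extent
    return (left, top, right, bottom)
```

The non-perpendicular (parallelogram) case would instead treat AB and CD as edge vectors and take the four corners A, B, A + (D - C), and B + (D - C).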
Through steps S401 and S402 above, the user can select the desired camera preview region, which adds to the fun of taking photos.
S403: detecting the user's scaling gesture.
Specifically, this step includes:
S4031: detecting a first touch point and a second touch point.
Specifically, when the user performs a scaling gesture with the index finger and thumb, the terminal detects the first touch point (the index-finger touch point) and the second touch point (the thumb touch point); the first touch point and the second touch point exist simultaneously.
S4032: judging the change in the distance between the first touch point and the second touch point within a preset time period [T1, T2].
Here, the preset period [T1, T2] may correspond to the period in which the user performs the zoom operation: T1 may be the start time of the user's zoom operation, and T2 may be its end time.
Specifically, the position coordinates of the first and second touch points are detected in real time during the preset period [T1, T2]; the distance between them is calculated from these position coordinates, and the change in that distance is determined. The distance change includes: the distance gradually increases, or the distance gradually decreases.
S4033: determining the user's scaling gesture according to the distance change.
Illustratively, when the distance gradually increases, the user's scaling gesture is determined to be a zoom-in gesture; when the distance gradually decreases, it is determined to be a zoom-out gesture.
S404: determining a scaling center and a zoom factor according to the scaling gesture.
Illustratively, when the user's scaling gesture is a zoom-in gesture, the midpoint of the line segment connecting the first touch point and the second touch point at time T1 is taken as the scaling center; that is, for a zoom-in gesture, the midpoint between the index finger and thumb at the moment the user starts touching is the scaling center. When the gesture is a zoom-out gesture, the midpoint of the segment connecting the first touch point and the second touch point at time T2 is taken as the scaling center; that is, for a zoom-out gesture, the midpoint between the index finger and thumb at the moment the user ends touching is the scaling center.
The ratio of the length of the segment between the first and second touch points at time T1 to the length of the segment between the first and second touch points at time T2 is taken as the zoom factor.
S405: determining the vertex coordinates of the camera preview region on the Open Graphics Library (OpenGL) according to the scaling center and the zoom factor.
In this embodiment, the camera preview region and the zoom factor are mapped onto OpenGL according to a preset mapping relationship based on the scaling center, yielding the vertex coordinates of the camera preview region on OpenGL.
Specifically, let the scaling center be (mPivotY, mPivotX), the zoom factor be mScale, and the mapped position of the camera preview region on OpenGL be fb(i, fl). Then:
when i is even, fl = ((fl - pivotX) * scale) + pivotX;
when i is odd, fl = ((fl - pivotY) * scale) + pivotY;
where pivotX = 1 - mPivotY, pivotY = mPivotX, and scale = mScale.
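The per-component mapping above can be written directly as code (a sketch using the patent's variable names; it assumes, as the even/odd rule suggests, a flattened vertex array with x components at even indices and y components at odd indices):

```python
def map_vertex_component(i, fl, m_pivot_x, m_pivot_y, m_scale):
    """Map one component of a vertex toward the scaling center.

    i:  index in the flattened vertex array (even = x, odd = y);
    fl: the component value in OpenGL coordinates;
    (mPivotY, mPivotX) is the scaling center, mScale the zoom factor.
    """
    pivot_x = 1 - m_pivot_y   # per the preset mapping relationship
    pivot_y = m_pivot_x
    scale = m_scale
    if i % 2 == 0:            # even index: x component
        return (fl - pivot_x) * scale + pivot_x
    else:                     # odd index: y component
        return (fl - pivot_y) * scale + pivot_y
```

A component equal to its pivot is left unchanged; with scale less than 1 every vertex moves toward the pivot, shrinking the sampled region, which corresponds to magnifying its contents in the preview.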
S406: calling OpenGL to draw a scaled picture according to the vertex coordinates.
Illustratively, when the user's scaling gesture is a zoom-in gesture, OpenGL is called to enlarge the image in the region enclosed by the vertex coordinates on OpenGL and draw it as the image of the camera preview region; in this mode, the camera preview region may be displayed full-screen.
When the gesture is a zoom-out gesture, OpenGL is called to shrink the image in the region enclosed by the vertex coordinates on OpenGL and draw it as the image of the camera preview region.
With the method for realizing digital zoom provided by the present invention, GPU processing is introduced: the vertex coordinates of the camera preview region on OpenGL are determined according to the scaling center and zoom factor, and OpenGL is then called to draw the scaled picture according to those vertex coordinates. In realizing digital zoom, the present invention avoids the complex, time-consuming process caused by excessive interaction with the HAL layer, achieves a smoother and more diverse digital zoom function, reduces the workload of the HAL layer, and improves the user experience.
Based on the above method embodiments, the present invention further provides a terminal 7 for realizing digital zoom, which may specifically be a mobile terminal such as a mobile phone, tablet computer, laptop, palmtop computer, personal digital assistant (PDA), portable media player (PMP), navigation terminal, wearable device, smart bracelet, or pedometer, or a fixed terminal such as a digital TV or desktop computer.
As shown in Figure 7, the terminal 7 for realizing digital zoom includes: a memory 71, a processor 72, and a digital zoom program stored in the memory and runnable on the processor. When executed by the processor, the digital zoom program implements the following steps:
detecting the user's scaling gesture;
determining a scaling center and a zoom factor according to the scaling gesture;
determining the vertex coordinates of the camera preview region on the Open Graphics Library (OpenGL) according to the scaling center and the zoom factor;
calling OpenGL to draw a scaled picture according to the vertex coordinates.
In one specific embodiment, in the step of detecting the user's scaling gesture, the processor further executes the digital zoom program to implement the following steps:
detecting a first touch point and a second touch point, wherein the first touch point and the second touch point exist simultaneously;
judging the change in the distance between the first touch point and the second touch point within a preset period [T1, T2];
determining the user's scaling gesture according to the distance change.
In one specific embodiment, the distance change includes: the distance gradually increases, or the distance gradually decreases. In the step of determining the user's scaling gesture according to the distance change, the processor further executes the digital zoom program to implement the following steps:
when the distance gradually increases, determining that the user's scaling gesture is a zoom-in gesture;
when the distance gradually decreases, determining that the user's scaling gesture is a zoom-out gesture.
In one specific embodiment, in the step of determining the scaling center and the zoom factor according to the scaling gesture, the processor further executes the digital zoom program to implement the following steps:
when the user's scaling gesture is a zoom-in gesture, taking the midpoint of the segment connecting the first touch point and the second touch point at time T1 as the scaling center, and taking the ratio of the length of the segment between the first and second touch points at time T1 to the length of the segment between the first and second touch points at time T2 as the zoom factor;
when the user's scaling gesture is a zoom-out gesture, taking the midpoint of the segment connecting the first touch point and the second touch point at time T2 as the scaling center, and taking the same ratio of the T1 segment length to the T2 segment length as the zoom factor.
In one specific embodiment, in the step of determining the vertex coordinates of the camera preview region on the Open Graphics Library (OpenGL) according to the scaling center and the zoom factor, the processor further executes the digital zoom program to implement the following step:
mapping the camera preview region and the zoom factor onto OpenGL according to a preset mapping relationship based on the scaling center, to obtain the vertex coordinates of the camera preview region on OpenGL.
In one specific embodiment, in the step of calling OpenGL to draw the scaled picture according to the vertex coordinates, the processor further executes the digital zoom program to implement the following steps:
when the user's scaling gesture is a zoom-in gesture, calling OpenGL to enlarge the image in the region enclosed by the vertex coordinates on OpenGL and draw it as the image of the camera preview region;
when the user's scaling gesture is a zoom-out gesture, calling OpenGL to shrink the image in the region enclosed by the vertex coordinates on OpenGL and draw it as the image of the camera preview region.
In one specific embodiment, before the step of detecting the user's scaling gesture, the processor further executes the digital zoom program to implement the following steps:
detecting a user slide operation;
determining the camera preview region according to the user slide operation.
In one specific embodiment, in the step of detecting the user slide operation, the processor further executes the digital zoom program to implement the following steps:
detecting first slide information on a first border of the terminal;
detecting second slide information on a second border of the terminal.
Determining the camera preview region according to the user slide operation includes:
determining the camera preview region according to the first slide information and the second slide information;
wherein the first slide information includes a first slide start coordinate and a first slide end coordinate, and the second slide information includes a second slide start coordinate and a second slide end coordinate;
the camera preview region is the region formed by connecting the first slide start coordinate, the first slide end coordinate, the second slide start coordinate, and the second slide end coordinate in a predetermined manner;
the predetermined manner includes at least one of a straight line, an arc, and a polyline.
Another aspect of the present invention further provides a computer-readable storage medium storing one or more programs, which can be executed by one or more processors to implement the following steps:
detecting the user's scaling gesture;
determining a scaling center and a zoom factor according to the scaling gesture;
determining the vertex coordinates of the camera preview region on the Open Graphics Library (OpenGL) according to the scaling center and the zoom factor;
calling OpenGL to draw a scaled picture according to the vertex coordinates.
In one specific embodiment, in the step of detecting the user's scaling gesture, the one or more programs can be executed by the one or more processors to implement the following steps:
detecting a first touch point and a second touch point, wherein the first touch point and the second touch point exist simultaneously;
judging the change in the distance between the first touch point and the second touch point within a preset period [T1, T2];
determining the user's scaling gesture according to the distance change.
In one specific embodiment, the distance change includes: the distance gradually increases, or the distance gradually decreases. In the step of determining the user's scaling gesture according to the distance change, the one or more programs can be executed by the one or more processors to implement the following steps:
when the distance gradually increases, determining that the user's scaling gesture is a zoom-in gesture;
when the distance gradually decreases, determining that the user's scaling gesture is a zoom-out gesture.
In one specific embodiment, in the step of determining the scaling center and the zoom factor according to the scaling gesture, the one or more programs can be executed by the one or more processors to implement the following steps:
when the user's scaling gesture is a zoom-in gesture, taking the midpoint of the segment connecting the first touch point and the second touch point at time T1 as the scaling center, and taking the ratio of the length of the segment between the first and second touch points at time T1 to the length of the segment between the first and second touch points at time T2 as the zoom factor;
when the user's scaling gesture is a zoom-out gesture, taking the midpoint of the segment connecting the first touch point and the second touch point at time T2 as the scaling center, and taking the same ratio of the T1 segment length to the T2 segment length as the zoom factor.
In one specific embodiment, in the step of determining the vertex coordinates of the camera preview region on the Open Graphics Library (OpenGL) according to the scaling center and the zoom factor, the one or more programs can be executed by the one or more processors to implement the following step:
mapping the camera preview region and the zoom factor onto OpenGL according to a preset mapping relationship based on the scaling center, to obtain the vertex coordinates of the camera preview region on OpenGL.
In one specific embodiment, in the step of calling OpenGL to draw the scaled picture according to the vertex coordinates, the one or more programs can be executed by the one or more processors to implement the following steps:
when the user's scaling gesture is a zoom-in gesture, calling OpenGL to enlarge the image in the region enclosed by the vertex coordinates on OpenGL and draw it as the image of the camera preview region;
when the user's scaling gesture is a zoom-out gesture, calling OpenGL to shrink the image in the region enclosed by the vertex coordinates on OpenGL and draw it as the image of the camera preview region.
In one specific embodiment, before the step of detecting the user's scaling gesture, the one or more programs can be executed by the one or more processors to implement the following steps:
detecting a user slide operation;
determining the camera preview region according to the user slide operation.
In one specific embodiment, in the step of detecting the user slide operation, the one or more programs can be executed by the one or more processors to implement the following steps:
detecting first slide information on a first border of the terminal;
detecting second slide information on a second border of the terminal.
Determining the camera preview region according to the user slide operation includes:
determining the camera preview region according to the first slide information and the second slide information;
wherein the first slide information includes a first slide start coordinate and a first slide end coordinate, and the second slide information includes a second slide start coordinate and a second slide end coordinate;
the camera preview region is the region formed by connecting the first slide start coordinate, the first slide end coordinate, the second slide start coordinate, and the second slide end coordinate in a predetermined manner;
the predetermined manner includes at least one of a straight line, an arc, and a polyline.
With the method, terminal, and computer-readable storage medium for realizing digital zoom provided by the present invention, GPU processing is introduced: the vertex coordinates of the camera preview region on OpenGL are determined according to the scaling center and zoom factor, and OpenGL is then called to draw the scaled picture according to those vertex coordinates. In realizing digital zoom, the present invention avoids the complex, time-consuming process caused by excessive interaction with the HAL layer, achieves a smoother and more diverse digital zoom function, reduces the workload of the HAL layer, and improves the user experience.
It should be noted that, as used herein, the terms "comprise", "include", or any variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or terminal including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or terminal. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or terminal that includes that element.
The above embodiment numbers of the present invention are for description only and do not indicate the relative merits of the embodiments.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, though in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, or the part that contributes to the prior art, can be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, or optical disc), including several instructions that cause a terminal (which may be a mobile phone, computer, server, air conditioner, or network device) to perform the methods described in the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the drawings, but the invention is not limited to the specific embodiments above, which are merely illustrative rather than restrictive. Under the inspiration of the present invention, those of ordinary skill in the art can derive many further forms without departing from the concept of the invention and the scope of protection of the claims, and all of these fall within the protection of the present invention.
Claims (10)
- 1. A method for realizing digital zoom, characterized in that the method comprises: detecting a user's scaling gesture; determining a scaling center and a zoom factor according to the scaling gesture; determining vertex coordinates of a camera preview region on the Open Graphics Library (OpenGL) according to the scaling center and the zoom factor; and calling OpenGL to draw a scaled picture according to the vertex coordinates.
- 2. The method for realizing digital zoom according to claim 1, characterized in that detecting the user's scaling gesture comprises: detecting a first touch point and a second touch point, wherein the first touch point and the second touch point exist simultaneously; judging the change in the distance between the first touch point and the second touch point within a preset period [T1, T2]; and determining the user's scaling gesture according to the distance change.
- 3. The method for realizing digital zoom according to claim 2, characterized in that the distance change includes: the distance gradually increases, or the distance gradually decreases; and determining the user's scaling gesture according to the distance change comprises: when the distance gradually increases, determining that the user's scaling gesture is a zoom-in gesture; when the distance gradually decreases, determining that the user's scaling gesture is a zoom-out gesture.
- 4. The method for realizing digital zoom according to claim 3, characterized in that determining the scaling center and the zoom factor according to the scaling gesture comprises: when the user's scaling gesture is a zoom-in gesture, taking the midpoint of the segment connecting the first touch point and the second touch point at time T1 as the scaling center, and taking the ratio of the length of the segment between the first and second touch points at time T1 to the length of the segment between the first and second touch points at time T2 as the zoom factor; and when the user's scaling gesture is a zoom-out gesture, taking the midpoint of the segment connecting the first touch point and the second touch point at time T2 as the scaling center, and taking the same ratio of the T1 segment length to the T2 segment length as the zoom factor.
- 5. The method for realizing digital zoom according to claim 4, characterized in that determining the vertex coordinates of the camera preview region on the Open Graphics Library (OpenGL) according to the scaling center and the zoom factor comprises: mapping the camera preview region and the zoom factor onto OpenGL according to a preset mapping relationship based on the scaling center, to obtain the vertex coordinates of the camera preview region on OpenGL.
- 6. The method for realizing digital zoom according to claim 5, characterized in that calling OpenGL to draw the scaled picture according to the vertex coordinates comprises: when the user's scaling gesture is a zoom-in gesture, calling OpenGL to enlarge the image in the region enclosed by the vertex coordinates on OpenGL and draw it as the image of the camera preview region; and when the user's scaling gesture is a zoom-out gesture, calling OpenGL to shrink the image in the region enclosed by the vertex coordinates on OpenGL and draw it as the image of the camera preview region.
- 7. The method for realizing digital zoom according to any one of claims 1 to 6, characterized in that before detecting the user's scaling gesture, the method further comprises: detecting a user slide operation; and determining the camera preview region according to the user slide operation.
- 8. The method for realizing digital zoom according to claim 7, characterized in that detecting the user slide operation comprises: detecting first slide information on a first border of the terminal; and detecting second slide information on a second border of the terminal; determining the camera preview region according to the user slide operation comprises: determining the camera preview region according to the first slide information and the second slide information, wherein the first slide information includes a first slide start coordinate and a first slide end coordinate, and the second slide information includes a second slide start coordinate and a second slide end coordinate; the camera preview region is the region formed by connecting the first slide start coordinate, the first slide end coordinate, the second slide start coordinate, and the second slide end coordinate in a predetermined manner; and the predetermined manner includes at least one of a straight line, an arc, and a polyline.
- 9. A terminal for realizing digital zoom, characterized in that the terminal comprises: a memory, a processor, and a digital zoom program stored in the memory and runnable on the processor, wherein the digital zoom program, when executed by the processor, implements the steps of the method for realizing digital zoom according to any one of claims 1 to 5.
- 10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores one or more programs, and the one or more programs can be executed by one or more processors to implement the steps of the method for realizing digital zoom according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711463885.9A CN108170350A (en) | 2017-12-28 | 2017-12-28 | Realize method, terminal and the computer readable storage medium of Digital Zoom |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711463885.9A CN108170350A (en) | 2017-12-28 | 2017-12-28 | Realize method, terminal and the computer readable storage medium of Digital Zoom |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108170350A true CN108170350A (en) | 2018-06-15 |
Family
ID=62519572
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711463885.9A Pending CN108170350A (en) | 2017-12-28 | 2017-12-28 | Realize method, terminal and the computer readable storage medium of Digital Zoom |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108170350A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108804014A (en) * | 2018-06-28 | 2018-11-13 | 苏州乐米信息科技股份有限公司 | A kind of Zoom method and system of three-dimensional view angle |
CN109816761A (en) * | 2018-12-25 | 2019-05-28 | 东软集团股份有限公司 | Figure conversion method, device, storage medium and electronic equipment |
CN109933264A (en) * | 2019-03-19 | 2019-06-25 | 深圳市元征科技股份有限公司 | Graph data display methods and device |
CN112637481A (en) * | 2020-11-25 | 2021-04-09 | 华为技术有限公司 | Image scaling method and device |
CN112764650A (en) * | 2021-01-29 | 2021-05-07 | 久瓴(江苏)数字智能科技有限公司 | Graph scaling method and device, electronic equipment and storage medium |
EP4113969A1 (en) * | 2021-07-01 | 2023-01-04 | Beijing Xiaomi Mobile Software Co., Ltd. | Zoom control method, apparatus, medium and program product |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102281399A (en) * | 2011-08-12 | 2011-12-14 | 广东步步高电子工业有限公司 | Digital photographic equipment with touch screen and zooming method of digital photographic equipment |
CN102637198A (en) * | 2012-02-28 | 2012-08-15 | 优视科技有限公司 | Realization method and device of webpage content display, browser and mobile terminal |
CN102831577A (en) * | 2012-08-29 | 2012-12-19 | 电子科技大学 | Method for fast zooming two-dimensional seismic image based on GPU (graphic processing unit) |
US20130120605A1 (en) * | 2010-03-03 | 2013-05-16 | Todor G. Georgiev | Methods, Apparatus, and Computer-Readable Storage Media for Blended Rendering of Focused Plenoptic Camera Data |
CN103412720A (en) * | 2013-06-28 | 2013-11-27 | 贵阳朗玛信息技术股份有限公司 | Method and device for processing touch-control input signals |
CN103793159A (en) * | 2012-10-30 | 2014-05-14 | 中兴通讯股份有限公司 | Mobile terminal and display area and display content arranging method thereof |
CN104715053A (en) * | 2012-02-28 | 2015-06-17 | 优视科技有限公司 | Method, device and browser for achieving web content display |
CN104822088A (en) * | 2015-04-16 | 2015-08-05 | 腾讯科技(北京)有限公司 | Video image zooming method and device |
- 2017-12-28: Application CN201711463885.9A filed in China; published as CN108170350A (legal status: Pending)
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108804014A (en) * | 2018-06-28 | 2018-11-13 | 苏州乐米信息科技股份有限公司 | A kind of Zoom method and system of three-dimensional view angle |
CN109816761A (en) * | 2018-12-25 | 2019-05-28 | 东软集团股份有限公司 | Figure conversion method, device, storage medium and electronic equipment |
CN109816761B (en) * | 2018-12-25 | 2023-03-21 | 东软集团股份有限公司 | Graph conversion method, graph conversion device, storage medium and electronic equipment |
CN109933264A (en) * | 2019-03-19 | 2019-06-25 | 深圳市元征科技股份有限公司 | Graph data display methods and device |
CN109933264B (en) * | 2019-03-19 | 2022-01-04 | 深圳市元征科技股份有限公司 | Graphic data display method and device |
CN112637481A (en) * | 2020-11-25 | 2021-04-09 | 华为技术有限公司 | Image scaling method and device |
CN112637481B (en) * | 2020-11-25 | 2022-03-29 | 华为技术有限公司 | Image scaling method and device |
CN112764650A (en) * | 2021-01-29 | 2021-05-07 | 久瓴(江苏)数字智能科技有限公司 | Graph scaling method and device, electronic equipment and storage medium |
CN112764650B (en) * | 2021-01-29 | 2022-08-23 | 久瓴(江苏)数字智能科技有限公司 | Graph scaling method and device, electronic equipment and storage medium |
EP4113969A1 (en) * | 2021-07-01 | 2023-01-04 | Beijing Xiaomi Mobile Software Co., Ltd. | Zoom control method, apparatus, medium and program product |
US11736799B2 (en) | 2021-07-01 | 2023-08-22 | Beijing Xiaomi Mobile Software Co., Ltd. | Zoom control method, apparatus and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108037893A (en) | A kind of display control method of flexible screen, device and computer-readable recording medium | |
CN108170350A (en) | Realize method, terminal and the computer readable storage medium of Digital Zoom | |
CN108052302A (en) | Association display methods, terminal and the computer readable storage medium of double-sided screen | |
CN108268195A (en) | One-handed performance display methods, mobile terminal and computer readable storage medium | |
CN109213401A (en) | Double-sided screen application icon method for sorting, mobile terminal and readable storage medium storing program for executing | |
CN108182028A (en) | A kind of control method, terminal and computer readable storage medium | |
CN107329682A (en) | Edge exchange method and mobile terminal | |
CN110007816A (en) | A kind of display area determines method, terminal and computer readable storage medium | |
CN108108109A (en) | A kind of terminal operation method, mobile terminal and computer readable storage medium | |
CN107347115A (en) | Method, equipment and the computer-readable recording medium of information input | |
CN108196922A (en) | A kind of method, terminal and computer readable storage medium for opening application | |
CN107463324A (en) | A kind of image display method, mobile terminal and computer-readable recording medium | |
CN107908355A (en) | Touch control method, mobile terminal and the storage medium of mobile terminal | |
CN107844232A (en) | A kind of screen operator control method and mobile terminal, computer-readable recording medium | |
CN108172161A (en) | Display methods, mobile terminal and computer readable storage medium based on flexible screen | |
CN108055483A (en) | A kind of picture synthesis method, mobile terminal and computer readable storage medium | |
CN108920075A (en) | Dual-screen mobile terminal control method, mobile terminal and computer readable storage medium | |
CN110180181A (en) | Screenshot method, device and the computer readable storage medium of Wonderful time video | |
CN108170348A (en) | A kind of thumbnail method for previewing, equipment and computer readable storage medium | |
CN108196777A (en) | A kind of flexible screen application process, equipment and computer readable storage medium | |
CN107229410A (en) | Interactive operation method, mobile terminal and computer-readable recording medium | |
CN108845711A (en) | Screen touch method, terminal and computer readable storage medium | |
CN108572777A (en) | A kind of terminal object method for sorting, terminal and computer readable storage medium | |
CN108600325A (en) | A kind of determination method, server and the computer readable storage medium of push content | |
CN107566608A (en) | A kind of system air navigation aid, equipment and computer-readable recording medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 2018-06-15 |