CN107222737A - Processing method for depth image data, and mobile terminal - Google Patents
- Publication number: CN107222737A (application CN201710617604.4)
- Authority: CN (China)
- Prior art keywords: distance, target object, depth image, lens, digital zoom
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
Abstract
The invention discloses a processing method for depth image data and a mobile terminal. The method includes: obtaining a first distance from a target object in an original depth image to the real lens; after detecting a digital zoom operation that adjusts the preview image size on the mobile terminal, obtaining a second distance from the target object to a virtual lens, where the virtual lens is the equivalent lens of the real lens after the digital zoom; adjusting the display scale of the target object in the original depth image according to the second distance and the first distance; and generating the target image after digital zoom from the adjusted depth image. The invention captures the original depth image with a depth camera and applies perspective correction to the magnification of objects at different distances during digital zoom, so that the result more closely approximates the perspective change produced by physically moving closer.
Description
Technical field
The present invention relates to the field of intelligent mobile terminals, and in particular to a processing method for depth image data and a mobile terminal.
Background technology
With the maturation of depth camera technology, particularly the adoption of 3D capture techniques such as TOF (Time of Flight ranging), the result of shooting is no longer simple two-dimensional graphic data but three-dimensional graphic data carrying 3D depth information. Current digital zoom technology simulates moving closer by magnifying the captured picture data. However, because the effect is achieved by simple picture magnification, the actual change in perspective cannot be truly reproduced, so the display effect of the three-dimensional graphic data after digital zoom is poor.
Summary of the invention
Embodiments of the present invention provide a processing method for depth image data and a mobile terminal, to solve the prior-art problem that the display effect of three-dimensional graphic data after digital zoom is poor.
In a first aspect, an embodiment of the present invention provides a processing method for depth image data, including:
obtaining a first distance from a target object in an original depth image to the real lens;
after detecting a digital zoom operation that adjusts the preview image size on the mobile terminal, obtaining a second distance from the target object to a virtual lens, where the virtual lens is the equivalent lens of the real lens after the digital zoom;
adjusting the display scale of the target object in the original depth image according to the second distance and the first distance; and
generating the target image after digital zoom from the adjusted depth image.
In a second aspect, an embodiment of the present invention further provides a mobile terminal, including:
a first acquisition module, configured to obtain a first distance from a target object in an original depth image to the real lens;
a second acquisition module, configured to obtain, after a digital zoom operation that adjusts the preview image size on the mobile terminal is detected, a second distance from the target object to a virtual lens, where the virtual lens is the equivalent lens of the real lens after the digital zoom;
a first adjusting module, configured to adjust the display scale of the target object in the original depth image according to the second distance and the first distance; and
a generation module, configured to generate the target image after digital zoom from the adjusted depth image.
In a third aspect, an embodiment of the present invention further provides a mobile terminal including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the processing method for depth image data described above.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the processing method for depth image data described above.
In this way, the mobile terminal of the embodiment of the present invention captures the original depth image with a depth camera and applies perspective correction to the magnification of objects at different distances during digital zoom, so that the result more closely approximates the perspective change produced by physically moving closer.
Brief description of the drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of the processing method of depth image data according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of imaging before digital zoom in an embodiment of the present invention;
Fig. 3 is a schematic diagram of imaging after digital zoom in an embodiment of the present invention;
Fig. 4 is a module diagram of the mobile terminal of an embodiment of the present invention;
Fig. 5 is a first block diagram of the mobile terminal of an embodiment of the present invention;
Fig. 6 is a second block diagram of the mobile terminal of an embodiment of the present invention.
Embodiments
Exemplary embodiments of the present invention are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the present invention, it should be understood that the present invention may be implemented in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the present invention will be thoroughly understood and its scope fully conveyed to those skilled in the art.
As shown in Fig. 1, an embodiment of the present invention provides a processing method for depth image data, which specifically includes the following steps.
Step 101: obtain a first distance from a target object in an original depth image to the real lens.
Here, the original depth image is the depth image collected by the depth camera of the mobile terminal after focusing, i.e., the depth image collected after the real lens has focused. A depth image is also known as a range image; the depth value of each pixel (also called the pixel value) represents the distance between an object in the captured scene and the real lens of the mobile terminal. Since the target object has a certain physical size, its first distance to the real lens may in fact be a set of distances; alternatively, the target object can be treated as a single equivalent point, and the first distance taken as the distance from that point to the real lens.
Step 102: after detecting a digital zoom operation that adjusts the preview image size on the mobile terminal, obtain a second distance from the target object to the virtual lens.
In practice, shooting scenes at different distances requires zooming, but optical zoom is difficult to implement on a mobile terminal lens, so the focusing effect for near and far scenes can be achieved by digital zoom instead. After a digital zoom operation that adjusts the preview image size is detected, the second distance from the target object to the virtual lens is obtained, where the virtual lens is the equivalent lens of the real lens after the digital zoom. It is worth noting that, since the target object has a certain physical size, its second distance to the virtual lens may likewise be a set of distances; alternatively, the target object can be treated as a single equivalent point, and the second distance taken as the distance from that point to the virtual lens.
Step 103: adjust the display scale of the target object in the original depth image according to the second distance and the first distance.
If the second distance is smaller than the first distance (the virtual lens has moved closer to the object), the display scale of the target object in the original depth image is increased; if the second distance is larger than the first distance, the display scale is reduced. In this way the display scale is adjusted according to the first and second distances so that it approaches the scale the target object would have in actual vision, improving the display effect.
Step 104: generate the target image after digital zoom from the adjusted depth image.
Because adjusting the depth image is an image-processing operation performed before the image is previewed, to obtain a preview whose display effect is close to the actual visual effect, the target image after digital zoom still has to be generated from the adjusted depth image. The target image here is the preview image, which can also be regarded as the image finally captured.
To simplify the processing of obtaining the first distance from the target object to the real lens, step 101 specifically includes the following steps: obtaining the depth value of at least one pixel corresponding to the target object in the original depth image; and determining, from the depth value of each pixel, the first distance from the corresponding target object to the real lens. Since the depth value (or pixel value) of each pixel in a depth image represents the distance between an object in the captured scene and the real lens of the mobile terminal, obtaining the depth values of all pixels corresponding to the target object, or of several pixels extracted from them, directly yields the first distance between the corresponding target object and the real lens. Here the depth of each pixel may be represented by its pixel value; it should also be noted that any other value that uniquely characterizes a pixel can serve as the pixel value, such as the distance from the pixel to the real lens.
The above describes determining the first distance from the target object to the real lens from the per-pixel distances. Since the target object corresponds to multiple pixels and different pixels correspond to different distances, the question is how to obtain the distance from the target object to the real lens accurately; in principle the distance could be computed and obtained for every pixel individually. Specifically, to reduce processing complexity, this may be done as follows: compute a first average of the depth values of the pixels, and take the distance corresponding to that first average as the first distance from the target object to the real lens. That is, all pixels corresponding to the target object are reduced to a single equivalent pixel: the first average of the per-pixel depth values is computed, and the distance corresponding to it is determined as the first distance from the target object to the real lens.
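As a concrete sketch of this averaging, the following helper (its name and the use of a plain list of per-pixel depths are illustrative assumptions, not the patent's implementation) reduces the target object's pixels to one equivalent distance:

```python
def first_distance(pixel_depths):
    """Collapse the target object's pixels into one equivalent point by
    averaging their depth values, i.e. their distances to the real lens."""
    if not pixel_depths:
        raise ValueError("target object has no pixels")
    return sum(pixel_depths) / len(pixel_depths)

# Three depth samples (in metres) taken on the target object:
print(first_distance([1.5, 2.0, 2.5]))  # -> 2.0
```
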
Further, after the digital zoom operation that adjusts the preview image size is detected, the post-zoom virtual lens is derived; this virtual lens is the equivalent of the lens after zooming, and a certain deviation, an offset distance, arises between the virtual lens and the real lens. Step 102 specifically includes: determining the lens offset distance between the virtual lens and the real lens according to the zoom factor of the digital zoom operation; and determining the second distance from the target object to the virtual lens from the first distance and the lens offset distance. Different zoom factors produce different offset distances between the virtual lens and the real lens.
Specifically, when the digital zoom operation zooms in, the step of determining the second distance from the first distance and the lens offset distance is: take the difference between the first distance and the lens offset distance as the second distance from the target object to the virtual lens. When the digital zoom operation zooms out, the step is: take the sum of the first distance and the lens offset distance as the second distance from the target object to the virtual lens.
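A minimal sketch of this case split follows; the function name and the assumption that the offset has already been derived from the zoom factor are mine, not the patent's:

```python
def second_distance(first_dist, lens_offset, zoom_in):
    """Distance from the target object to the virtual lens: zooming in
    moves the virtual lens toward the object (subtract the offset),
    zooming out moves it away (add the offset)."""
    return first_dist - lens_offset if zoom_in else first_dist + lens_offset

print(second_distance(2.0, 0.5, zoom_in=True))   # -> 1.5
print(second_distance(2.0, 0.5, zoom_in=False))  # -> 2.5
```
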
Further, step 103 includes: computing the imaging size of each target object after digital zoom by a calculation formula involving the first distance, the second distance, and the display scale; and adjusting the display scale of the target object in the original depth image according to that imaging size. The calculation formula contains distance parameters such as the first distance and the second distance, and characterizes the imaging-size relationship after digital zoom.
The step of computing the imaging size of each target object after digital zoom uses the formula
H'1 = H1 × S1 × (S'1 + S2) / (S'1 × (S1 + S2))
where H'1 is the imaging size of the target object after digital zoom, H1 is its imaging size in the original depth image, S1 is the distance between the real lens and the target object, S2 is the difference between the imaging distance of the target object and S1 (so S1 + S2 is the imaging distance of the target object), and S'1 is the distance between the virtual lens and the target object.
Specifically, taking image height as an example, as shown in Fig. 2, the height of the target object in the original depth image is G1, the distance from the real lens to the target object is d1, and the difference between the imaging distance of the target object and d1 is d2; equivalently, the depth distance from the target object to the imaging background is d2. As shown in Fig. 3, the image height of the target object after digital zoom is G'1 and the distance from the post-zoom virtual lens to the target object is d'1, while the difference between the imaging distance of the target object and d'1 remains d2; in other words, the depth distance from the target object to the imaging background is unchanged before and after digital zoom. According to the above formula, once G1, d1, d2 and d'1 are obtained, G'1 can be calculated as G'1 = G1 × d1 × (d'1 + d2) / (d'1 × (d1 + d2)).
It can be seen from the formula that as the simulated closing distance d'1 becomes smaller, the image height G'1 becomes larger. The height of the imaging size is taken as the example here; the width, or the size in any other direction, can be calculated with a similar algorithm, giving a scaled size closer to the actual visual effect.
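The rescaling just described can be sketched as follows. Note that the formula implemented here is a reconstruction consistent with the variable definitions in the text (the printed formula itself did not survive extraction), so its exact form should be treated as an assumption:

```python
def zoomed_size(h1, s1, s2, s1_virtual):
    """Imaging size of a target object after digital zoom.

    h1:         imaging size in the original depth image
    s1:         real-lens-to-object distance (first distance)
    s2:         object-to-imaging-background depth, unchanged by zoom
    s1_virtual: virtual-lens-to-object distance (second distance)
    """
    return h1 * s1 * (s1_virtual + s2) / (s1_virtual * (s1 + s2))

print(zoomed_size(100.0, 2.0, 3.0, 2.0))  # -> 100.0 (no zoom: size unchanged)
print(zoomed_size(100.0, 2.0, 3.0, 1.0))  # -> 160.0 (closer lens, larger image)
```

As the text notes, shrinking the simulated distance (the last argument) enlarges the object, while an object at the background depth would not be rescaled at all.
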
Further, to optimize the display effect of the image, the processing method of the depth image data of the embodiment of the present invention also includes, before generating the target image after digital zoom from the adjusted depth image: adjusting the display scale of the regions of the original depth image other than the target object, according to the adjusted target object. Specifically, this can be achieved by the following steps: obtaining a third distance between a region of the original depth image other than the target object and the real lens; detecting whether the difference between the third distance and the first distance is less than a preset threshold; and, if it is, adjusting the display scale of that region according to the imaging size of the corresponding target object.
That is, after the scaling of each target object in the viewing area is determined, to make the display effect of the whole image closer to the actual visual effect, the embodiment of the present invention further adjusts the display scale of the background regions outside the target objects. The difference between the third distance and the first distance described above may be positive or negative; as long as its absolute value is below the preset threshold, the display scale of the region can be adjusted using the scaling of the target object corresponding to the first distance.
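A sketch of this threshold test for background regions; the choice to leave regions beyond the threshold unscaled (factor 1.0) is an assumption, since the text only specifies the in-threshold case:

```python
def region_scale(third_dist, first_dist, threshold, object_scale):
    """Display-scale factor for a background region: a region whose distance
    differs from the target object's first distance by less than the preset
    threshold inherits the object's scaling; others are left unscaled."""
    if abs(third_dist - first_dist) < threshold:
        return object_scale
    return 1.0

print(region_scale(2.3, 2.0, 0.5, 1.6))  # -> 1.6 (within threshold)
print(region_scale(5.0, 2.0, 0.5, 1.6))  # -> 1.0 (beyond threshold)
```
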
In the processing method of the depth image data of the embodiment of the present invention, the mobile terminal captures the original depth image with a depth camera and applies perspective correction to the magnification of objects at different distances during digital zoom, so that the result more closely approximates the perspective change produced by physically moving closer.
The above embodiments describe the processing method of the depth image data under different scenarios; the corresponding mobile terminal is described further below with reference to Fig. 4.
As shown in Fig. 4, the mobile terminal 400 of the embodiment of the present invention can realize the details of the method of the above embodiments: obtaining the first distance from a target object in an original depth image to the real lens; obtaining, after a digital zoom operation that adjusts the preview image size on the mobile terminal is detected, the second distance from the target object to the virtual lens; adjusting the display scale of the target object in the original depth image according to the second distance and the first distance; and generating the target image after digital zoom from the adjusted depth image. It achieves the same effect, and specifically includes the following functional modules:
a first acquisition module 410, configured to obtain a first distance from a target object in an original depth image to the real lens;
a second acquisition module 420, configured to obtain, after a digital zoom operation that adjusts the preview image size on the mobile terminal is detected, a second distance from the target object to a virtual lens, where the virtual lens is the equivalent lens of the real lens after the digital zoom;
a first adjusting module 430, configured to adjust the display scale of the target object in the original depth image according to the second distance and the first distance; and
a generation module 440, configured to generate the target image after digital zoom from the adjusted depth image.
The first acquisition module 410 includes:
a first acquisition submodule, configured to obtain the depth value of at least one pixel corresponding to the target object in the original depth image; and
a first processing submodule, configured to determine, from the depth value of each pixel, the first distance from the corresponding target object to the real lens.
The first processing submodule includes:
a first computing unit, configured to compute a first average of the depth values of the pixels; and
a first processing unit, configured to determine the distance corresponding to the first average as the first distance from the target object to the real lens.
The second acquisition module 420 includes:
a second acquisition submodule, configured to determine the lens offset distance between the virtual lens and the real lens according to the zoom factor of the digital zoom operation; and
a second processing submodule, configured to determine the second distance from the target object to the virtual lens from the first distance and the lens offset distance.
The second processing submodule includes:
a second processing unit, configured to determine the difference between the first distance and the lens offset distance as the second distance from the target object to the virtual lens.
The first adjusting module 430 includes:
a calculating submodule, configured to compute the imaging size of each target object after digital zoom by the calculation formula involving the first distance, the second distance, and the display scale; and
an adjusting submodule, configured to adjust the display scale of the target object in the original depth image according to the imaging size.
The calculating submodule includes:
a second computing unit, configured to compute the imaging size of each target object after digital zoom by the formula H'1 = H1 × S1 × (S'1 + S2) / (S'1 × (S1 + S2)), where H'1 is the imaging size of the target object after digital zoom, H1 is its imaging size in the original depth image, S1 is the distance between the real lens and the target object, S2 is the difference between the imaging distance of the target object and S1, S1 + S2 is the imaging distance of the target object, and S'1 is the distance between the virtual lens and the target object.
The mobile terminal also includes:
a third acquisition module, configured to obtain a third distance between a region of the original depth image other than the target object and the real lens;
a detection module, configured to detect whether the difference between the third distance and the first distance is less than a preset threshold; and
a second adjusting module, configured to adjust, when the difference between the third distance and the first distance is less than the preset threshold, the display scale of that region of the original depth image according to the imaging size of the corresponding target object.
It is worth noting that the mobile terminal of the embodiment of the present invention corresponds to the processing method of the depth image data described above, and the embodiments of that method and the technical effects achieved apply equally to the embodiments of the mobile terminal. The mobile terminal captures the original depth image with a depth camera and applies perspective correction to the magnification of objects at different distances during digital zoom, so that the result more closely approximates the perspective change produced by physically moving closer.
To better achieve the above purpose, an embodiment of the present invention further provides a mobile terminal including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the processing method of the depth image data described above. An embodiment of the present invention further provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the processing method of the depth image data described above.
Specifically, Fig. 5 is a block diagram of a mobile terminal 500 according to another embodiment of the present invention. The mobile terminal shown in Fig. 5 includes: at least one processor 501, a memory 502, a network interface 503 and a user interface 504. The components of the mobile terminal 500 are coupled by a bus system 505. It can be understood that the bus system 505 is used to realize the connections and communication between these components. Besides a data bus, the bus system 505 also includes a power bus, a control bus and a status signal bus. For clarity of explanation, however, the various buses are all labeled as the bus system 505 in Fig. 5.
Alternatively, some or all of the above components may be embedded on a chip of the terminal in the form of a field programmable gate array (FPGA). They may be implemented separately or integrated together.
The user interface 504 is used to connect peripheral devices, or serves as an interface circuit connected to peripheral devices. It may include the interface of a device such as a display or a keyboard, or of a pointing device such as a mouse, a trackball, a touch-sensitive pad or a touch screen.
It can be understood that the processor 501 may be a general-purpose processor such as a CPU, or one or more integrated circuits configured to implement the above method, for example: one or more application-specific integrated circuits (ASICs), one or more microprocessors or digital signal processors (DSPs), or one or more field programmable gate arrays (FPGAs). The storage element may be one storage device or a collective term for multiple storage elements.
The memory 502 in the embodiment of the present invention may be a volatile memory or a nonvolatile memory, or may include both. The nonvolatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or flash memory. The volatile memory may be a random access memory (RAM), used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM) and direct Rambus RAM (DRRAM). The memory 502 of the systems and methods described herein is intended to include, without being limited to, these and any other suitable types of memory.
In some embodiments, the memory 502 stores the following elements, executable modules or data structures, or a subset or superset of them: an operating system 5021 and application programs 5022.
The operating system 5021 contains various system programs, such as a framework layer, a core library layer and a driver layer, for realizing various basic services and handling hardware-based tasks. The application programs 5022 include various applications, such as a media player and a browser, for realizing various application services. A program implementing the method of the embodiment of the present invention may be contained in the application programs 5022.
In an embodiment of the present invention, the mobile terminal 500 also includes a computer program stored on the memory 502 and executable on the processor 501; specifically, it may be a computer program in the application programs 5022. The computer program, when executed by the processor 501, implements the following steps: obtaining a first distance from a target object in an original depth image to the real lens; after detecting a digital zoom operation that adjusts the preview image size on the mobile terminal, obtaining a second distance from the target object to a virtual lens, where the virtual lens is the equivalent lens of the real lens after the digital zoom; adjusting the display scale of the target object in the original depth image according to the second distance and the first distance; and generating the target image after digital zoom from the adjusted depth image.
The methods disclosed in the above embodiments of the present invention may be applied in, or realized by, the processor 501. The processor 501 may be an integrated circuit chip with signal processing capability. In implementation, each step of the above method may be completed by an integrated logic circuit of the hardware in the processor 501 or by instructions in the form of software. The processor 501 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and can realize or execute the methods, steps and logic diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of the present invention may be embodied directly as being completed by a hardware decoding processor, or completed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 502, and the processor 501 reads the information in the memory 502 and completes the steps of the above method in combination with its hardware.
It can be understood that the embodiments described herein may be realized with hardware, software, firmware, middleware, microcode, or a combination of them. For a hardware implementation, the processing unit may be realized in one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described in the application, or a combination of them.
For a software implementation, the techniques described herein may be realized by modules (such as processes and functions) that perform the functions described herein. The software code may be stored in a memory and executed by a processor. The memory may be realized inside or outside the processor.
Specifically, the following steps may also be implemented when the computer program is executed by the processor 501: obtaining the depth distance of at least one pixel corresponding to the target object in the original depth image; and determining, according to the depth distance of each pixel, the first distance of the corresponding target object to the real lens.
Specifically, the following steps may also be implemented when the computer program is executed by the processor 501: calculating a first average value of the depth distances of the pixels; and determining the distance corresponding to the first average value as the first distance of the target object to the real lens.
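This averaging step can be sketched as follows; the pixel-coordinate list and array layout are assumptions for illustration:

```python
import numpy as np

def first_distance(depth_map, object_pixels):
    # Average the depth distances of the pixels belonging to the
    # target object; this mean is taken as the object's first
    # distance to the real lens.
    depths = np.array([depth_map[y, x] for y, x in object_pixels],
                      dtype=float)
    return float(depths.mean())
```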
Specifically, the following steps may also be implemented when the computer program is executed by the processor 501: determining, according to the zoom magnification corresponding to the digital zoom operation, the lens offset distance between the virtual lens and the real lens; and determining, according to the first distance and the lens offset distance, the second distance of the target object to the virtual lens.
Further, the following steps may also be implemented when the computer program is executed by the processor 501: determining the difference between the first distance and the lens offset distance as the second distance of the target object to the virtual lens.
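A minimal sketch of this subtraction; the mapping from zoom magnification to lens offset is an assumption (the text only states that the offset is determined from the magnification):

```python
def lens_offset_for_zoom(first_dist, zoom_magnification):
    # Assumed mapping: an N-x digital zoom places the virtual lens
    # so that the object distance shrinks to 1/N of the original.
    return first_dist * (1.0 - 1.0 / zoom_magnification)

def second_distance(first_dist, lens_offset):
    # Second distance = first distance - lens offset, as stated.
    return first_dist - lens_offset
```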
The following steps may also be implemented when the computer program is executed by the processor 501: calculating, by a calculation formula relating the first distance, the second distance and the display scale, the imaging size of each target object after the digital zoom; and adjusting, according to the imaging size, the display scale of the target object in the original depth image.
The following steps may also be implemented when the computer program is executed by the processor 501: calculating, by the calculation formula relating the first distance, the second distance and the display scale, the imaging size of each target object after the digital zoom; wherein H1′ denotes the imaging size of the target object after the digital zoom, H1 denotes the imaging size of the target object in the original depth image, S1 denotes the distance between the real lens and the target object, S2 denotes the difference between the imaging distance of the target object and S1, S1 + S2 denotes the imaging distance of the target object, and S1′ denotes the distance between the virtual lens and the target object.
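The formula itself appears only as an image in the source, so the scaling below is a hedged stand-in: under a pinhole model with a fixed imaging distance, the imaging size grows with the ratio of the real to the virtual object distance (S1 / S1′); `s2` is accepted but unused here, mirroring the patent's symbol list:

```python
def zoomed_imaging_size(h1, s1, s1_virtual, s2=0.0):
    # Hedged stand-in for the patent's (unreproduced) formula:
    # H1' = H1 * S1 / S1'. The actual formula may also involve the
    # imaging distance S1 + S2.
    return h1 * s1 / s1_virtual

def display_scale(h1, h1_zoomed):
    # Scale factor applied to the object's region of the depth image.
    return h1_zoomed / h1
```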
The following steps may also be implemented when the computer program is executed by the processor 501: obtaining a third distance between the real lens and the regions of the original depth image other than the target object; detecting whether the difference between the third distance and the first distance is less than a preset threshold; and if it is, adjusting the display scale of those other regions in the original depth image according to the imaging size of the corresponding target object.
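The threshold test for background regions can be sketched as follows (names are illustrative):

```python
def scales_with_object(third_dist, first_dist, threshold):
    # Regions whose distance to the real lens is within the preset
    # threshold of the target object's first distance are rescaled
    # together with the object; farther regions are left alone.
    return abs(third_dist - first_dist) < threshold
```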
In the mobile terminal 500 of the embodiment of the present invention, the original depth image is captured by a depth camera, and perspective correction is applied to the magnification of objects at different distances during digital zoom, so that the imaging effect more closely approximates the perspective change produced by physically moving closer to the subject.
Fig. 6 is a schematic structural diagram of a mobile terminal according to another embodiment of the present invention. Specifically, the mobile terminal 600 in Fig. 6 may be a mobile phone, a tablet computer, a personal digital assistant (Personal Digital Assistant, PDA), an in-vehicle computer, or the like.
The mobile terminal 600 in Fig. 6 includes a power supply 610, a memory 620, an input unit 630, a display unit 640, a photographing component 650, a processor 660, a WIFI (Wireless Fidelity) module 670, an audio circuit 680 and an RF circuit 690, wherein the photographing component 650 is a depth camera.
The input unit 630 may be configured to receive information input by a user and to generate signal inputs related to user settings and function control of the mobile terminal 600. Specifically, in the embodiment of the present invention, the input unit 630 may include a touch panel 631. The touch panel 631, also referred to as a touch screen, collects the user's touch operations on or near it (such as operations performed by the user on the touch panel 631 with a finger, a stylus or any other suitable object or accessory) and drives the corresponding connected devices according to a preset program. Optionally, the touch panel 631 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 660, and receives and executes commands sent by the processor 660. The touch panel 631 may be implemented in various types, such as resistive, capacitive, infrared and surface acoustic wave. In addition to the touch panel 631, the input unit 630 may also include other input devices 632, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse and a joystick.
The display unit 640 may be configured to display information input by the user or information provided to the user, as well as the various menu interfaces of the mobile terminal. The display unit 640 may include a display panel 641; optionally, the display panel 641 may be configured in the form of an LCD or an organic light-emitting diode (Organic Light-Emitting Diode, OLED).
It should be noted that the touch panel 631 may cover the display panel 641 to form a touch display screen. After the touch display screen detects a touch operation on or near it, the operation is transmitted to the processor 660 to determine the type of the touch event, and the processor 660 then provides a corresponding visual output on the touch display screen according to the type of the touch event.
The touch display screen includes an application interface display area and a common control display area. The arrangement of these two display areas is not limited; they may be arranged one above the other, side by side, or in any other arrangement that distinguishes the two areas. The application interface display area may be used to display the interfaces of applications, and each interface may contain interface elements such as icons and/or widgets of at least one application; it may also be an empty interface containing no content. The common control display area is used to display frequently used controls, for example application icons such as a settings button, interface numbers, a scroll bar and a phone book icon.
The processor 660 is the control center of the mobile terminal. It connects the various parts of the entire device through various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in a first memory 621 and calling data stored in a second memory 622, thereby monitoring the mobile terminal as a whole. Optionally, the processor 660 may include one or more processing units.
In the embodiment of the present invention, by calling the software programs and/or modules stored in the first memory 621 and/or the data stored in the second memory 622, the processor 660 is specifically configured to perform the following steps: obtaining the first distance of the target object in the original depth image to the real lens; after detecting a digital zoom operation that adjusts the preview image size in the mobile terminal, obtaining the second distance of the target object to the virtual lens, wherein the virtual lens is the equivalent lens of the real lens after the digital zoom; adjusting, according to the second distance and the first distance, the display scale of the target object in the original depth image; and generating, according to the adjusted depth image, the target image after the digital zoom.
Specifically, the processor 660 is further configured to: obtain the distance of at least one pixel corresponding to the target object in the original depth image to the real lens; and determine, according to the distance of each pixel to the real lens, the first distance of the corresponding target object to the real lens.
Specifically, the processor 660 is further configured to: calculate a first average value of the distances of the pixels to the real lens; and determine the distance corresponding to the first average value as the first distance of the target object to the real lens.
Further, the processor 660 is further configured to: obtain the distance of at least one pixel corresponding to the target object to the virtual lens; and determine, according to the distance of each pixel to the virtual lens, the second distance of the corresponding target object to the virtual lens.
The processor 660 is further configured to: calculate a second average value of the distances of the pixels to the virtual lens; and determine the distance corresponding to the second average value as the second distance of the target object to the virtual lens.
The processor 660 is further configured to: calculate, by the calculation formula relating the first distance, the second distance and the display scale, the imaging size of each target object after the digital zoom; and adjust, according to the imaging size, the display scale of the target object in the original depth image.
The processor 660 is further configured to: calculate, by that formula, the imaging size of each target object after the digital zoom; wherein H1′ denotes the imaging size of the target object after the digital zoom, H1 denotes the imaging size of the target object in the original depth image, S1 denotes the distance between the real lens and the target object, S2 denotes the difference between the imaging distance of the target object and S1, S1 + S2 denotes the imaging distance of the target object, and S1′ denotes the distance between the virtual lens and the target object.
The processor 660 is further configured to: obtain the third distance between the real lens and the regions of the original depth image other than the target object; detect whether the difference between the third distance and the first distance is less than a preset threshold; and if it is, adjust the display scale of those other regions in the original depth image according to the imaging size of the corresponding target object.
In the mobile terminal 600 of the embodiment of the present invention, the original depth image is captured by a depth camera, and perspective correction is applied to the magnification of objects at different distances during digital zoom, so that the imaging effect more closely approximates the perspective change produced by physically moving closer to the subject.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, devices and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
In the embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division of the units is only a division by logical function, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings or communication connections shown or discussed between components may be indirect couplings or communication connections of devices or units through some interfaces, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the essence of the technical solution of the present invention, or the part that contributes to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk or an optical disc.
The above are preferred embodiments of the present invention. It should be pointed out that a person of ordinary skill in the art may also make several improvements and modifications without departing from the principle of the present invention, and these improvements and modifications also fall within the protection scope of the present invention.
Claims (18)
1. A processing method of depth image data, characterized by comprising:
obtaining a first distance of a target object in an original depth image to a real lens;
after detecting a digital zoom operation that adjusts a preview image size in a mobile terminal, obtaining a second distance of the target object to a virtual lens, wherein the virtual lens is an equivalent lens of the real lens after the digital zoom;
adjusting, according to the second distance and the first distance, a display scale of the target object in the original depth image; and
generating, according to the adjusted depth image, a target image after the digital zoom.
2. The processing method of depth image data according to claim 1, characterized in that the step of obtaining the first distance of the target object in the original depth image to the real lens comprises:
obtaining a depth distance of at least one pixel corresponding to the target object in the original depth image; and
determining, according to the depth distance of each pixel, the first distance of the corresponding target object to the real lens.
3. The processing method of depth image data according to claim 2, characterized in that the step of determining, according to the depth distance of each pixel, the first distance of the corresponding target object to the real lens comprises:
calculating a first average value of the depth distances of the pixels; and
determining the distance corresponding to the first average value as the first distance of the target object to the real lens.
4. The processing method of depth image data according to claim 1, characterized in that the step of obtaining the second distance of the target object to the virtual lens after detecting the digital zoom operation that adjusts the preview image size in the mobile terminal comprises:
determining, according to a zoom magnification corresponding to the digital zoom operation, a lens offset distance between the virtual lens and the real lens; and
determining, according to the first distance and the lens offset distance, the second distance of the target object to the virtual lens.
5. The processing method of depth image data according to claim 4, characterized in that the step of determining, according to the first distance and the lens offset distance, the second distance of the target object to the virtual lens comprises:
determining the difference between the first distance and the lens offset distance as the second distance of the target object to the virtual lens.
6. The processing method of depth image data according to claim 1, characterized in that the step of adjusting, according to the second distance and the first distance, the display scale of the target object in the original depth image comprises:
calculating, by a calculation formula relating the first distance, the second distance and the display scale, an imaging size of each target object after the digital zoom; and
adjusting, according to the imaging size, the display scale of the target object in the original depth image.
7. The processing method of depth image data according to claim 6, characterized in that the step of calculating, by the calculation formula relating the first distance, the second distance and the display scale, the imaging size of each target object after the digital zoom comprises:
calculating, by that formula, the imaging size of each target object after the digital zoom;
wherein H1′ denotes the imaging size of the target object after the digital zoom, H1 denotes the imaging size of the target object in the original depth image, S1 denotes the distance between the real lens and the target object, S2 denotes the difference between the imaging distance of the target object and S1, S1 + S2 denotes the imaging distance of the target object, and S1′ denotes the distance between the virtual lens and the target object.
8. The processing method of depth image data according to claim 6, characterized in that before the step of generating, according to the adjusted depth image, the target image after the digital zoom, the method further comprises:
obtaining a third distance between the real lens and the regions of the original depth image other than the target object;
detecting whether the difference between the third distance and the first distance is less than a preset threshold; and
if it is, adjusting the display scale of those other regions in the original depth image according to the imaging size of the corresponding target object.
9. A mobile terminal, characterized by comprising:
a first obtaining module, configured to obtain a first distance of a target object in an original depth image to a real lens;
a second obtaining module, configured to obtain a second distance of the target object to a virtual lens after a digital zoom operation that adjusts a preview image size in the mobile terminal is detected, wherein the virtual lens is an equivalent lens of the real lens after the digital zoom;
a first adjusting module, configured to adjust, according to the second distance and the first distance, a display scale of the target object in the original depth image; and
a generating module, configured to generate, according to the adjusted depth image, a target image after the digital zoom.
10. The mobile terminal according to claim 9, characterized in that the first obtaining module comprises:
a first obtaining submodule, configured to obtain a depth distance of at least one pixel corresponding to the target object in the original depth image; and
a first processing submodule, configured to determine, according to the depth distance of each pixel, the first distance of the corresponding target object to the real lens.
11. The mobile terminal according to claim 10, characterized in that the first processing submodule comprises:
a first calculating unit, configured to calculate a first average value of the depth distances of the pixels; and
a first processing unit, configured to determine the distance corresponding to the first average value as the first distance of the target object to the real lens.
12. The mobile terminal according to claim 9, characterized in that the second obtaining module comprises:
a second obtaining submodule, configured to determine, according to a zoom magnification corresponding to the digital zoom operation, a lens offset distance between the virtual lens and the real lens; and
a second processing submodule, configured to determine, according to the first distance and the lens offset distance, the second distance of the target object to the virtual lens.
13. The mobile terminal according to claim 12, characterized in that the second processing submodule comprises:
a second processing unit, configured to determine the difference between the first distance and the lens offset distance as the second distance of the target object to the virtual lens.
14. The mobile terminal according to claim 9, characterized in that the first adjusting module comprises:
a calculating submodule, configured to calculate, by a calculation formula relating the first distance, the second distance and the display scale, an imaging size of each target object after the digital zoom; and
an adjusting submodule, configured to adjust, according to the imaging size, the display scale of the target object in the original depth image.
15. The mobile terminal according to claim 14, characterized in that the calculating submodule comprises:
a second calculating unit, configured to calculate, by that formula, the imaging size of each target object after the digital zoom;
wherein H1′ denotes the imaging size of the target object after the digital zoom, H1 denotes the imaging size of the target object in the original depth image, S1 denotes the distance between the real lens and the target object, S2 denotes the difference between the imaging distance of the target object and S1, S1 + S2 denotes the imaging distance of the target object, and S1′ denotes the distance between the virtual lens and the target object.
16. The mobile terminal according to claim 14, characterized in that the mobile terminal further comprises:
a third obtaining module, configured to obtain a third distance between the real lens and the regions of the original depth image other than the target object;
a detecting module, configured to detect whether the difference between the third distance and the first distance is less than a preset threshold; and
a second adjusting module, configured to adjust, when the difference between the third distance and the first distance is less than the preset threshold, the display scale of those other regions in the original depth image according to the imaging size of the corresponding target object.
17. A mobile terminal, characterized in that the mobile terminal comprises a processor and a memory, the memory storing a computer program executable on the processor, wherein when the processor executes the computer program, the steps of the processing method of depth image data according to any one of claims 1 to 8 are implemented.
18. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the processing method of depth image data according to any one of claims 1 to 8 are implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710617604.4A CN107222737B (en) | 2017-07-26 | 2017-07-26 | A kind of processing method and mobile terminal of depth image data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107222737A true CN107222737A (en) | 2017-09-29 |
CN107222737B CN107222737B (en) | 2019-05-17 |
Family
ID=59954418
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710617604.4A Active CN107222737B (en) | 2017-07-26 | 2017-07-26 | A kind of processing method and mobile terminal of depth image data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107222737B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109190469A (en) * | 2018-07-27 | 2019-01-11 | 阿里巴巴集团控股有限公司 | A kind of detection method and device, a kind of calculating equipment and storage medium |
CN110827219A (en) * | 2019-10-31 | 2020-02-21 | 北京小米智能科技有限公司 | Training method, device and medium of image processing model |
CN111739097A (en) * | 2020-06-30 | 2020-10-02 | 上海商汤智能科技有限公司 | Distance measuring method and device, electronic equipment and storage medium |
CN112532839A (en) * | 2020-11-25 | 2021-03-19 | 深圳市锐尔觅移动通信有限公司 | Camera module, imaging method, imaging device and mobile equipment |
CN114339018A (en) * | 2020-09-30 | 2022-04-12 | 北京小米移动软件有限公司 | Lens switching method and device and storage medium |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102333228A (en) * | 2010-06-14 | 2012-01-25 | Lg电子株式会社 | Electronic device and method of controlling electronic device |
JP2012160864A (en) * | 2011-01-31 | 2012-08-23 | Sanyo Electric Co Ltd | Imaging apparatus |
US20130063571A1 (en) * | 2011-09-12 | 2013-03-14 | Canon Kabushiki Kaisha | Image processing apparatus and image processing method |
CN103824303A (en) * | 2014-03-14 | 2014-05-28 | 格科微电子(上海)有限公司 | Image perspective distortion adjusting method and device based on position and direction of photographed object |
US20150003819A1 (en) * | 2013-06-28 | 2015-01-01 | Nathan Ackerman | Camera auto-focus based on eye gaze |
CN105451012A (en) * | 2015-11-18 | 2016-03-30 | 湖南拓视觉信息技术有限公司 | Three-dimensional imaging system and three-dimensional imaging method |
CN105657237A (en) * | 2014-11-13 | 2016-06-08 | 聚晶半导体股份有限公司 | Image acquisition device and digital zooming method thereof |
CN105847660A (en) * | 2015-06-01 | 2016-08-10 | 维沃移动通信有限公司 | Dynamic zoom method, device and intelligent device |
CN205883405U (en) * | 2016-07-29 | 2017-01-11 | 深圳众思科技有限公司 | Automatic chase after burnt device and terminal |
CN106791375A (en) * | 2016-11-29 | 2017-05-31 | 维沃移动通信有限公司 | One kind shoots focusing method and mobile terminal |
CN106973222A (en) * | 2017-02-28 | 2017-07-21 | 维沃移动通信有限公司 | The control method and mobile terminal of a kind of Digital Zoom |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109190469A (en) * | 2018-07-27 | 2019-01-11 | 阿里巴巴集团控股有限公司 | A kind of detection method and device, a kind of calculating equipment and storage medium |
CN109190469B (en) * | 2018-07-27 | 2020-06-23 | 阿里巴巴集团控股有限公司 | Detection method and device, computing equipment and storage medium |
CN110827219A (en) * | 2019-10-31 | 2020-02-21 | 北京小米智能科技有限公司 | Training method, device and medium of image processing model |
CN111739097A (en) * | 2020-06-30 | 2020-10-02 | 上海商汤智能科技有限公司 | Distance measuring method and device, electronic equipment and storage medium |
CN114339018A (en) * | 2020-09-30 | 2022-04-12 | 北京小米移动软件有限公司 | Lens switching method and device and storage medium |
CN114339018B (en) * | 2020-09-30 | 2023-08-22 | 北京小米移动软件有限公司 | Method and device for switching lenses and storage medium |
CN112532839A (en) * | 2020-11-25 | 2021-03-19 | 深圳市锐尔觅移动通信有限公司 | Camera module, imaging method, imaging device and mobile equipment |
Also Published As
Publication number | Publication date |
---|---|
CN107222737B (en) | 2019-05-17 |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||