CN106657600A - Image processing method and mobile terminal - Google Patents
- Publication number
- CN106657600A (application CN201610941867.6A / CN201610941867A)
- Authority
- CN
- China
- Prior art keywords
- image
- camera
- target subject
- characteristic
- mobile terminal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/7243—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
- H04M1/72439—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Business, Economics & Management (AREA)
- General Business, Economics & Management (AREA)
- Human Computer Interaction (AREA)
- Computer Networks & Wireless Communication (AREA)
- Studio Devices (AREA)
- Image Processing (AREA)
Abstract
An embodiment of the invention provides an image processing method and a mobile terminal. The method performs ranging on a target subject using a first camera and a second camera to obtain distance values between the mobile terminal and a plurality of feature points of the target subject; captures, through the first camera, a first image containing the target subject; and extracts image information of the target subject from the first image according to the plurality of distance values. Using both cameras for ranging makes effective use of hardware resources and enriches their functions; capturing the first image with only the first camera reduces power consumption and saves resources; and extracting the image information of the target subject according to the plurality of distance values allows the subject to be extracted accurately and efficiently, greatly improving matting accuracy and enhancing the user experience.
Description
Technical field
The present invention relates to the technical field of mobile terminals, and more particularly to an image processing method and a mobile terminal.
Background art
With the continuous development of electronic products, mobile terminals with a shooting function have become increasingly popular. Users can shoot with them anytime and anywhere and send the resulting images to relatives and friends in forms such as multimedia messages, that is, shoot-and-send, which is quick and convenient.
Mobile terminals such as smartphones are no longer simple communication tools, but devices that combine functions such as leisure, entertainment, and communication. At the same time, users' requirements for shooting effects and image quality keep rising. Because a background is often unsuitable, users frequently need to replace it; however, portrait recognition in the prior art is not accurate, that is, the accuracy of matting a "portrait" or "figure" is low, so the background of the replaced image does not blend well with the portrait of the original image. The cause of this phenomenon is the low precision with which the mobile terminal performs image recognition.
Summary of the invention
In view of the above problems, embodiments of the present invention provide an image processing method and a mobile terminal, to solve the above problem in the prior art that a specific "portrait" or "figure" in an image cannot be extracted accurately.
To solve the above problems, an embodiment of the invention discloses an image processing method applied to a mobile terminal, the mobile terminal including at least two cameras, the method including:
performing ranging on a target subject using a first camera and a second camera, to obtain distance values between the mobile terminal and a plurality of feature points of the target subject;
capturing, through the first camera, a first image containing the target subject; and
extracting image information of the target subject from the first image according to the plurality of distance values.
An embodiment of the invention also discloses a mobile terminal including at least two cameras, and including:
a distance value acquisition module, configured to perform ranging on a target subject using the first camera and the second camera simultaneously, to obtain distance values between the mobile terminal and a plurality of feature points of the target subject;
a first image acquisition module, configured to capture, through the first camera, a first image containing the target subject; and
an image information extraction module, configured to extract image information of the target subject from the first image according to the plurality of distance values.
Embodiments of the present invention include the following advantages.
In the embodiments, ranging is performed on the target subject using the first camera and the second camera simultaneously to obtain distance values between the mobile terminal and a plurality of feature points of the target subject; the first image containing the target subject captured by the first camera is obtained; and the image information of the target subject is extracted from the first image according to the plurality of distance values. Using the first and second cameras for ranging makes effective use of hardware resources and enriches their purposes; capturing the first image with only the first camera reduces power consumption and saves resources; and extracting the image information of the target subject according to the plurality of distance values allows the subject's image information to be extracted accurately and efficiently, greatly improving the accuracy of "matting" and enhancing the user experience.
Description of the drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed for describing the embodiments are briefly introduced below. Apparently, the drawings in the following description show only some embodiments of the present invention, and persons of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a step flow chart of Embodiment 1 of an image processing method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of triangulation in an embodiment of the present invention;
Fig. 3 is a step flow chart of Embodiment 2 of an image processing method according to an embodiment of the present invention;
Fig. 4 is a structural block diagram of a mobile terminal of Device Embodiment 3 in an embodiment of the present invention;
Fig. 5 is a structural block diagram of a mobile terminal of Device Embodiment 4 in an embodiment of the present invention;
Fig. 6 is a structural schematic diagram of a mobile terminal of Device Embodiment 5 in an embodiment of the present invention.
Specific embodiment
To make the technical problems solved, the technical solutions, and the beneficial effects of the embodiments of the present invention clearer, the embodiments are further described below with reference to the drawings. It should be understood that the specific embodiments described herein are only intended to explain the present invention, not to limit it.
Method Embodiment 1
Referring to Fig. 1, a step flow chart of Embodiment 1 of an image processing method according to an embodiment of the present invention is shown. The method is applied to a mobile terminal including at least two cameras, and may specifically include the following steps.
Step 101: perform ranging on a target subject using a first camera and a second camera, to obtain distance values between the mobile terminal and a plurality of feature points of the target subject.
In the embodiment of the present invention, using the principle of triangulation, the first camera and the second camera simultaneously perform ranging on the target subject to obtain distance values between the mobile terminal and a plurality of feature points of the target subject. A distance value may be the distance from the mobile terminal to a particular point on the "surface" of the target subject. For example, when the target subject is a person, the distance values may be the distances from the mobile terminal to different points on the subject's surface such as the eyes, mouth, ears, and nose; that is, the feature points may be different points on the target subject's surface, such as the eye, mouth, ear, and nose points mentioned above. Specifically, the focal length parameter of the first camera and the second camera is obtained, then the center-distance parameter between the first camera and the second camera is obtained, and finally the feature disparity parameter of the first camera and the second camera is obtained; the distance values between the mobile terminal and the plurality of feature points of the target subject are calculated from the focal length parameter, the center-distance parameter, and the feature disparity parameter. A distance value can be calculated according to the following formula: Z = fT/d, where Z is the distance value, that is, the distance from the mobile terminal to the target subject; f is the focal length parameter of the first camera and the second camera; T is the center-distance parameter of the first camera and the second camera; and d is the feature disparity parameter of the first camera and the second camera. Referring to the schematic diagram of triangulation shown in Fig. 2, O_r is the lens optical center of the first camera, O_l is the lens optical center of the second camera, and d is the difference between X_R and X_T, where X_R and X_T are the small disparity values, obtained by the processor through image information matching, measured from the corresponding starting points in the two images.
It should be noted that the focal length parameter may be the focal length obtained when the user adjusts the camera while shooting, and may be determined according to the parameter range of the camera; for example, the focal length parameter may range from 10 mm to 45 mm. In the embodiment of the present invention, the focal length parameters of the first camera and the second camera are the same value.
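The depth formula Z = fT/d above can be sketched in Python. The function name and the sample numbers below are illustrative assumptions, not values taken from the patent:

```python
def stereo_depth_mm(focal_mm: float, baseline_mm: float, disparity_mm: float) -> float:
    """Triangulated depth Z = f*T/d, with all quantities in millimetres.

    focal_mm is the shared focal length f, baseline_mm is the camera
    center distance T, and disparity_mm is the feature disparity d.
    """
    if disparity_mm <= 0:
        # Zero disparity corresponds to a point at infinity.
        raise ValueError("disparity must be positive")
    return focal_mm * baseline_mm / disparity_mm

# Hypothetical example: f = 25 mm, T = 20 mm, d = 0.5 mm -> Z = 1000 mm.
depth = stereo_depth_mm(25.0, 20.0, 0.5)
```

Note that depth varies inversely with disparity: nearby feature points produce large disparities, which is why the two-camera arrangement can tell a near subject from a far background.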
Step 102: capture, through the first camera, a first image containing the target subject.
Further, after the user uses the first camera to shoot, with first focus information, a first image containing the target subject, the mobile terminal obtains the first image containing the target subject. It should be noted that the target subject may include a portrait and/or a figure and/or a scene, which is not specifically limited in the embodiment of the present invention.
Step 103: extract the image information of the target subject from the first image according to the plurality of distance values.
In the embodiment of the present invention, a feature coordinate system is established using the plurality of distance values to obtain feature difference data and feature size data; the image size data of the first image is obtained; feature pixel data is calculated using the image size data, the feature size data, and the feature difference data; and the image information of the first image is extracted using the feature pixel data. It should be noted that the feature coordinate system may include a spatial rectangular coordinate system. Once the spatial rectangular coordinate system is established, the actual size information of the target subject contained in the first image is effectively known, and the image size data (the size of the image) can be obtained from the parameter settings of the mobile terminal, so the image information of the target subject can be extracted according to the proportional relationship between the above data. Specifically, the image information may include the image size information of the target subject.
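The proportional relationship described above can be sketched as follows; the variable names and the linear-scaling assumption are illustrative, since the patent does not spell out the exact mapping between actual sizes and pixel sizes:

```python
def feature_pixel_size(actual_size_mm: float, scene_extent_mm: float, image_size_px: int) -> int:
    """Map a feature's real-world size to its extent in pixels.

    Assumes the framed scene width (scene_extent_mm) projects linearly
    onto the image width (image_size_px), so pixel extent scales with
    the ratio of actual size to scene extent.
    """
    return round(actual_size_mm / scene_extent_mm * image_size_px)

# Hypothetical example: a 300 mm-wide subject in a 1200 mm-wide framed
# scene, captured in a 4000 px-wide image, spans 1000 px.
px = feature_pixel_size(300.0, 1200.0, 4000)
```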
In the embodiment of the present invention, ranging is performed on the target subject using the first camera and the second camera simultaneously to obtain the distance values between the mobile terminal and a plurality of feature points of the target subject; the first image containing the target subject captured by the first camera is obtained; and the image information of the target subject is extracted from the first image according to the plurality of distance values. Using the first and second cameras for ranging makes effective use of hardware resources and enriches their purposes; capturing the first image with only the first camera reduces power consumption and saves resources; and extracting the image information of the target subject according to the plurality of distance values allows the subject's image information to be extracted accurately and efficiently, greatly improving the accuracy of "matting" and enhancing the user experience.
Method Embodiment 2
Referring to Fig. 3, a step flow chart of Embodiment 2 of an image processing method according to an embodiment of the present invention is shown. The method is applied to a mobile terminal including at least two cameras, and may specifically include the following steps.
Step 201: obtain the focal length parameter of the first camera and the second camera.
In the embodiment of the present invention, the focal length parameter of the first camera and the second camera is obtained; the focal length parameter may be a parameter set by the user according to actual conditions. The focal length parameter refers to the distance, in the camera of the mobile terminal, from the lens center to the imaging plane of the CCD (Charge-Coupled Device, the photosensitive element). The focus information can be configured according to the focal length range of different mobile terminals; in the embodiment of the present invention, the first camera and the second camera are set to the same value.
Step 202: obtain the center-distance parameter between the first camera and the second camera.
Further, the center-distance parameter is the distance between the optical center of the first camera and the optical center of the second camera; obtaining the center-distance parameter is a preparation for calculating the plurality of distance values.
Step 203: obtain the feature disparity parameter of the first camera and the second camera.
In the embodiment of the present invention, d is the feature disparity parameter of the first camera and the second camera, and d is the difference between X_R and X_T, where X_R and X_T are the small disparity values, obtained by the processor through image information matching, measured from the corresponding starting points in the two images.
Step 204: calculate the distance values between the mobile terminal and the plurality of feature points of the target subject using the focal length parameter, the center-distance parameter, and the feature disparity parameter.
In practice, the distance value Z is obtained according to the formula Z = fT/d, where f is the focal length parameter of the first camera and the second camera, T is the center-distance parameter of the first camera and the second camera, and d is the feature disparity parameter of the first camera and the second camera.
Step 205: capture, through the first camera, the first image containing the target subject.
In a preferred embodiment of the present invention, the step of capturing, through the first camera, the first image containing the target subject includes:
obtaining the first image shot by the first camera with first focus information.
Specifically, after ranging the target entity and establishing the spatial rectangular coordinate system, the actual size information of the target subject contained in the first image is obtained. The embodiment of the present invention also obtains the first image shot by the first camera with the first focus information, the first image being an image containing the target subject. In the embodiment of the present invention, other shooting parameters may also be used to configure the first camera before shooting the first image with it. For example, on the basis of the first focus information, the user may also set the aperture information and/or exposure information and/or gain information of the first camera according to actual conditions and then shoot the image; the embodiment of the present invention is not limited in this respect.
Step 206: extract the image information of the target subject from the first image according to the plurality of distance values.
In a preferred embodiment of the present invention, the step of extracting the image information of the target subject from the first image according to the plurality of distance values includes: establishing a feature coordinate system using the plurality of distance values, and obtaining feature difference data and feature size data; obtaining the image size data of the first image; obtaining feature pixel data based on the image size data, the feature size data, and the feature difference data; and extracting the image information of the first image based on the feature pixel data. Here, the feature difference data may be the actual size information of the portrait contour in the real environment, the feature size data may be the actual framing size information of the first image, the feature pixel data may be the image size information of the portrait contour in the first image, and the image size data may be the size of the image obtained by the mobile terminal, which is distinct from the actual size information.
Specifically, after the feature coordinate system is established, all the actual size information of the target subject in the first image, as it exists in the real environment, is effectively known, and the image information of the first image is extracted according to the proportional relationship between the image size information and the actual size information.
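As an illustrative sketch only (the patent describes extraction via size proportions rather than an explicit algorithm), per-feature-point distance values can also separate a near subject from a far background by thresholding a depth map; every name and number below is a hypothetical assumption:

```python
def subject_mask(distances_mm, threshold_mm):
    """Return a boolean mask keeping points closer than threshold_mm.

    distances_mm is a row-major 2-D list of per-point distance values;
    points nearer than the threshold are treated as the target subject,
    the rest as background.
    """
    return [[z < threshold_mm for z in row] for row in distances_mm]

# Hypothetical 2x3 depth map: subject at ~1 m, background at ~3 m.
depths = [[1000.0, 1050.0, 3000.0],
          [ 980.0, 3100.0, 2900.0]]
mask = subject_mask(depths, 2000.0)
# mask -> [[True, True, False], [True, False, False]]
```

This is one plausible way distance values make matting robust where color-based portrait recognition fails, e.g. in low light.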
In another preferred embodiment of the present invention, the method further includes: obtaining a second image; and adding the image information of the target subject to the second image to generate a third image.
In practical application, a second image chosen by the user, or a second image shot by the user with the first camera, is combined with the image information of the target subject in an adding operation to generate the third image. It should be noted that the second image may be an image stored in advance by the user on the mobile terminal, or an image shot by the user immediately after shooting the first image; the embodiment of the present invention is not limited in this respect.
In another preferred embodiment of the present invention, the step of adding the image information of the target subject to the second image to generate the third image includes: receiving a placement position set by the user for the image information; and adding the image information to the second image according to the placement position to generate the third image.
The image information is added to the second image according to the placement position selected by the user to generate the third image, and the third image is output to the mobile terminal.
In another preferred embodiment of the present invention, the step of adding the image information of the target subject to the second image to generate the third image further includes: performing a feature operation on the second image to obtain a feature image, where the feature operation includes a grayscale conversion operation and/or a zoom operation and/or a rotation operation; and adding the image information of the target subject to the feature image to generate the third image.
Furthermore, after one or more feature operations are performed on the second image, the image information of the target subject may be added to the feature image at the placement position selected by the user to generate the third image, where the feature operation includes a grayscale conversion operation and/or a zoom operation and/or a rotation operation; the embodiment of the present invention is not limited in this respect.
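The adding operation at a user-chosen placement position can be sketched as a simple paste of masked subject pixels onto the second image; the function, the single-channel pixel representation, and the sample values are illustrative assumptions rather than the patent's implementation:

```python
def composite(background, subject, mask, pos):
    """Paste masked subject pixels onto a copy of background.

    background and subject are 2-D lists of pixel values, mask is a
    2-D list of booleans shaped like subject, and pos = (row, col) is
    the user-chosen placement position of the subject's top-left corner.
    """
    out = [row[:] for row in background]  # work on a copy, keep the original
    r0, c0 = pos
    for r, (srow, mrow) in enumerate(zip(subject, mask)):
        for c, (pix, keep) in enumerate(zip(srow, mrow)):
            if keep and 0 <= r0 + r < len(out) and 0 <= c0 + c < len(out[0]):
                out[r0 + r][c0 + c] = pix  # only subject pixels overwrite
    return out

# Hypothetical example: a 1x2 subject pasted at (row 1, col 1) of a 3x3 background.
bg = [[0] * 3 for _ in range(3)]
third = composite(bg, [[9, 9]], [[True, False]], (1, 1))
# third[1] -> [0, 9, 0]
```

A grayscale, zoom, or rotation feature operation would simply transform `background` before this paste, which is why the patent lists those operations as an optional preliminary step.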
In the embodiment of the present invention, a feature coordinate system is established using the plurality of distance values to obtain feature difference data and feature size data; the image size data of the first image is obtained; feature pixel data is calculated using the image size data, the feature size data, and the feature difference data; and the image information of the first image is extracted using the feature pixel data. The placement position set by the user for the image information is received, and the image information is added to the second image according to the placement position to generate the third image; or a feature operation is performed on the second image to obtain a feature image, and the image information of the target subject is added to the feature image to generate the third image. In the embodiment of the present invention, the image information of the target subject is extracted accurately, solving the "matting" problem in low-light environments, and the image information of the target subject is added to the second image according to the user's needs, which further improves the user experience, raises the quality and visual appeal of the image, and strengthens user engagement.
It should be noted that, for brevity of description, the method embodiments are expressed as a series of action combinations. However, persons skilled in the art should understand that the embodiments of the present invention are not limited by the described order of actions, because according to the embodiments of the present invention, some steps may be performed in other orders or simultaneously. Secondly, persons skilled in the art should also understand that the embodiments described in this specification are preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present invention.
Device Embodiment 3
Fig. 4 is a block diagram of a mobile terminal according to an embodiment of the present invention. The mobile terminal 300 shown in Fig. 4 includes a distance value acquisition module 301, a first image acquisition module 302, and an image information extraction module 303.
The distance value acquisition module 301 is configured to perform ranging on a target subject using the first camera and the second camera simultaneously, to obtain distance values between the mobile terminal and a plurality of feature points of the target subject.
The first image acquisition module 302 is configured to capture, through the first camera, a first image containing the target subject.
The image information extraction module 303 is configured to extract the image information of the target subject from the first image according to the plurality of distance values.
Optionally, the terminal 300 further includes:
a second image acquisition module, configured to obtain a second image; and
an image generation module, configured to add the image information of the target subject to the second image, to generate a third image.
Optionally, the image generation module includes:
a placement position receiving submodule, configured to receive the placement position set by the user for the image information; and
a first image generation submodule, configured to add the image information to the second image according to the placement position, to generate the third image.
Optionally, the image generation module further includes:
a feature image acquisition submodule, configured to perform a feature operation on the second image to obtain a feature image, where the feature operation includes a grayscale conversion operation and/or a zoom operation and/or a rotation operation; and
a first image generation submodule, configured to add the image information of the target subject to the feature image, to generate the third image.
Optionally, the distance value acquisition module 301 includes:
a focal length parameter acquisition submodule, configured to obtain the focal length parameter of the first camera and the second camera;
a center-distance parameter acquisition submodule, configured to obtain the center-distance parameter between the first camera and the second camera;
a feature disparity parameter acquisition submodule, configured to obtain the feature disparity parameter of the first camera and the second camera; and
a distance value calculation submodule, configured to calculate the distance values between the mobile terminal and the plurality of feature points of the target subject using the focal length parameter, the center-distance parameter, and the feature disparity parameter.
Optionally, the image information extraction module 303 includes:
a feature difference data and feature size data acquisition submodule, configured to establish a feature coordinate system using the plurality of distance values, and obtain feature difference data and feature size data;
an image size data acquisition submodule, configured to obtain the image size data of the first image;
a feature pixel data calculation submodule, configured to calculate feature pixel data based on the image size data, the feature size data, and the feature difference data; and
an image information extraction submodule, configured to extract the image information of the first image based on the feature pixel data.
Device Embodiment 4
Fig. 5 is a block diagram of a mobile terminal according to another embodiment of the present invention. The mobile terminal 400 shown in Fig. 5 includes: at least one processor 401, a memory 402, at least one network interface 404, another user interface 403, and a photographing component 406. The components of the mobile terminal 400 are coupled together through a bus system 405. It can be understood that the bus system 405 is used to implement connection and communication between these components. In addition to a data bus, the bus system 405 also includes a power bus, a control bus, and a status signal bus. However, for clarity of description, the various buses are all labeled as the bus system 405 in Fig. 5. The photographing component 406 includes the first camera and the second camera, where the first camera and the second camera are used for ranging and for shooting images.
The user interface 403 may include a display, a keyboard, or a pointing device (for example, a mouse, a trackball, a touch-sensitive pad, or a touch screen).
It can be understood that the memory 402 in the embodiment of the present invention may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), used as an external cache. By way of exemplary but non-restrictive description, many forms of RAM are available, such as static random access memory (Static RAM, SRAM), dynamic random access memory (Dynamic RAM, DRAM), synchronous dynamic random access memory (Synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (Synchlink DRAM, SLDRAM), and direct rambus random access memory (Direct Rambus RAM, DRRAM). The memory 402 of the systems and methods described in the embodiments of the present invention is intended to include, but is not limited to, these and any other suitable types of memory.
In some embodiments, the memory 402 stores the following elements, executable modules or data structures, or subsets or supersets thereof: an operating system 4021 and application programs 4022.
The operating system 4021 contains various system programs, such as a framework layer, a core library layer, and a driver layer, for implementing various basic services and processing hardware-based tasks. The application programs 4022 contain various application programs, such as a media player (MediaPlayer) and a browser (Browser), for implementing various application services. Programs implementing the methods of the embodiments of the present invention may be contained in the application programs 4022.
In the embodiment of the present invention, by calling a program or instruction stored in the memory 402, which may specifically be a program or instruction stored in the application programs 4022, the processor 401 is configured to: perform ranging on a target subject using the first camera and the second camera, to obtain distance values between the mobile terminal and a plurality of feature points of the target subject; capture, through the first camera, a first image containing the target subject; and extract the image information of the target subject from the first image according to the plurality of distance values.
The methods disclosed in the embodiments of the present invention may be applied in the processor 401, or implemented by the processor 401. The processor 401 may be an integrated circuit chip with signal processing capability. During implementation, each step of the above method may be completed by an integrated logic circuit of hardware in the processor 401 or by instructions in the form of software. The processor 401 may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and block diagrams disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in the embodiments of the present invention may be embodied directly as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory or electrically erasable programmable memory, or a register. The storage medium is located in the memory 402, and the processor 401 reads the information in the memory 402 and completes the steps of the above method in combination with its hardware.
It can be understood that the embodiments described in the embodiments of the present invention may be implemented in hardware, software, firmware, middleware, microcode, or a combination thereof. For a hardware implementation, the processing unit may be implemented in one or more application-specific integrated circuits (Application Specific Integrated Circuits, ASIC), digital signal processors (Digital Signal Processing, DSP), digital signal processing devices (DSP Device, DSPD), programmable logic devices (Programmable Logic Device, PLD), field-programmable gate arrays (Field-Programmable Gate Array, FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described herein, or a combination thereof.

For a software implementation, the techniques described in the embodiments of the present invention may be implemented by modules (such as procedures, functions, and so on) that perform the functions described herein. The software code may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Optionally, the processor 401 is further configured to: obtain a second image;

Optionally, the processor 401 is further configured to: add the image information of the target subject to the second image, to generate a third image.

Optionally, the processor 401 is further configured to: receive a setting position of the user for the image information;

Optionally, the processor 401 is further configured to: add the image information to the second image according to the setting position, to generate the third image.
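As a concrete illustration of position-based addition, the sketch below composites extracted subject pixels onto a second image at a user-chosen offset. This is only a sketch under assumed data layouts (NumPy arrays and a boolean subject mask); the names `paste_at`, `bg`, and `subject` are illustrative and do not come from the patent:

```python
import numpy as np

def paste_at(background, cutout, mask, top, left):
    """Composite masked subject pixels onto a copy of the background
    at the user-selected position (top, left)."""
    out = background.copy()
    h, w = mask.shape
    region = out[top:top + h, left:left + w]
    region[mask] = cutout[mask]   # only subject pixels overwrite the background
    return out

# toy example: a 2x2 subject pasted into a 4x4 background
bg = np.zeros((4, 4), dtype=np.uint8)
subject = np.full((2, 2), 255, dtype=np.uint8)
mask = np.array([[True, False], [True, True]])
third_image = paste_at(bg, subject, mask, 1, 1)
```

Because `region` is a view into `out`, the masked assignment writes through to the copy while leaving the original second image untouched.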
Optionally, the processor 401 is further configured to: perform a characteristic operation on the second image, to obtain a characteristic image; wherein the characteristic operation includes a grayscale conversion operation and/or a zoom operation and/or a rotation operation;

Optionally, the processor 401 is further configured to: add the image information of the target subject to the characteristic image, to generate the third image.
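The three characteristic operations named above (grayscale conversion, zoom, rotation) can be sketched as below. This is an assumed minimal implementation on NumPy arrays, not the patent's own code: the luma weights, integer-only zoom, and 90-degree-step rotation are simplifying choices for illustration:

```python
import numpy as np

def characteristic_ops(img, to_gray=True, zoom=1, rot90_steps=0):
    """Apply the optional characteristic operations: grayscale conversion,
    integer zoom, and rotation in 90-degree steps."""
    out = np.asarray(img, dtype=float)
    if to_gray and out.ndim == 3:
        out = out @ np.array([0.299, 0.587, 0.114])  # ITU-R BT.601 luma weights
    if zoom > 1:
        out = np.kron(out, np.ones((zoom, zoom)))    # nearest-neighbour upscale
    if rot90_steps:
        out = np.rot90(out, rot90_steps)
    return out

rgb = np.ones((2, 3, 3)) * 100          # 2x3 mid-grey RGB image
feat = characteristic_ops(rgb, to_gray=True, zoom=2, rot90_steps=1)
```

A 2x3 RGB input becomes a 4x6 grayscale image after the 2x zoom, then 6x4 after one quarter-turn rotation.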
Optionally, the processor 401 is further configured to: obtain focal length parameters of the first camera and the second camera;

Optionally, the processor 401 is further configured to: obtain a center-distance parameter between the first camera and the second camera;

Optionally, the processor 401 is further configured to: obtain a characteristic disparity parameter of the first camera and the second camera;

Optionally, the processor 401 is further configured to: calculate the distance values between the mobile terminal and the multiple feature points of the target subject using the focal length parameters, the center-distance parameter, and the characteristic disparity parameter.
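As an illustration of how these three parameters can combine, a rectified stereo pair obeys the classic triangulation relation Z = f·B/d, where f is the focal length in pixels, B the center distance (baseline) between the two cameras, and d the disparity of a feature point between the two views. The sketch below is an assumption about the calculation the patent leaves unspecified; all names and values are hypothetical:

```python
def distance_from_disparity(focal_px, center_distance_m, disparity_px):
    """Stereo triangulation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * center_distance_m / disparity_px

# distance values for three hypothetical feature points
disparities = [10.0, 20.0, 40.0]   # pixels
depths = [distance_from_disparity(1000.0, 0.02, d) for d in disparities]
```

With a 1000-pixel focal length and a 2 cm baseline, disparities of 10, 20, and 40 pixels map to distances of 2 m, 1 m, and 0.5 m; nearer feature points produce larger disparities.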
Optionally, the processor 401 is further configured to: establish a characteristic coordinate system using the multiple distance values, to obtain feature difference data and feature size data;

Optionally, the processor 401 is further configured to: obtain image size data of the first image;

Optionally, the processor 401 is further configured to: obtain feature pixel data based on the image size data, the feature size data, and the feature difference data;

Optionally, the processor 401 is further configured to: extract the image information of the first image based on the feature pixel data.
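One simple way to realize distance-guided extraction is to keep only pixels whose measured distance falls inside the subject's depth band. The sketch below is a minimal assumption of that idea (a dense per-pixel depth map, NumPy arrays); the patent's feature pixel data may be computed differently:

```python
import numpy as np

def extract_subject(first_image, depth_map, near, far):
    """Keep only pixels whose distance lies in [near, far] (the depth band
    of the target subject); all other pixels are zeroed out."""
    mask = (depth_map >= near) & (depth_map <= far)
    cutout = np.zeros_like(first_image)
    cutout[mask] = first_image[mask]
    return cutout, mask

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
depth = np.full((4, 4), 5.0)       # background sits ~5 m away
depth[1:3, 1:3] = 1.0              # subject sits ~1 m away
cut, m = extract_subject(img, depth, 0.5, 2.0)
```

Here only the four central pixels survive the cutout, mirroring the "matting" step that separates the target subject from its background.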
The mobile terminal 400 can implement each process implemented by the mobile terminal in the foregoing embodiments; to avoid repetition, details are not described here again.
In the embodiments of the present invention, ranging is performed on a target subject using the first camera and the second camera simultaneously, to obtain distance values between the mobile terminal and multiple feature points of the target subject; a first image containing the target subject is acquired by the first camera; and image information of the target subject is extracted from the first image according to the multiple distance values. Performing ranging on the target subject using the first camera and the second camera makes effective use of hardware resources and enriches their uses; acquiring the first image using only the first camera reduces power consumption and saves resources; and extracting the image information of the target subject according to the multiple distance values achieves accurate and efficient extraction of the image information of the target subject, greatly improving the accuracy of "matting" (image cutout) and enhancing the user experience.
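The three-step flow summarized above (dual-camera ranging, single-camera capture, distance-guided extraction) can be sketched as the skeleton below. All classes and callables here are hypothetical stand-ins, not interfaces from the patent:

```python
class StubCamera:
    """Hypothetical camera stand-in; a real implementation would drive hardware."""
    def __init__(self, frame):
        self.frame = frame

    def capture(self):
        return self.frame

def process(first_cam, second_cam, range_fn, extract_fn):
    distances = range_fn(first_cam.capture(), second_cam.capture())  # step 1: dual-camera ranging
    first_image = first_cam.capture()                                # step 2: first camera only
    return extract_fn(first_image, distances)                        # step 3: distance-guided cutout

# wiring with trivial stand-ins: per-pixel "disparity" and a keep-if-zero cutout
cam_a, cam_b = StubCamera([1, 2, 3]), StubCamera([1, 2, 4])
result = process(
    cam_a, cam_b,
    range_fn=lambda a, b: [abs(x - y) for x, y in zip(a, b)],
    extract_fn=lambda img, d: [p for p, dist in zip(img, d) if dist == 0],
)
```

The point of the skeleton is the division of labor: only the ranging step touches both cameras, so the capture step can run on a single camera to save power.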
Device Embodiment Five

Fig. 6 is a schematic structural diagram of a mobile terminal according to another embodiment of the present invention. Specifically, the mobile terminal 500 in Fig. 6 may be a mobile phone, a tablet computer, a personal digital assistant (Personal Digital Assistant, PDA), an in-vehicle computer, or the like.
The mobile terminal 500 in Fig. 6 includes a radio frequency (Radio Frequency, RF) circuit 510, a memory 520, an input unit 530, a display unit 540, a processor 560, an audio circuit 570, a WiFi (Wireless Fidelity) module 580, a power supply 590, and a photographing component 5110.
The input unit 530 may be configured to receive numeric or character information input by the user, and to generate signal inputs related to user settings and function control of the mobile terminal 500. Specifically, in the embodiments of the present invention, the input unit 530 may include a touch panel 531. The touch panel 531, also referred to as a touch screen, can collect touch operations by the user on or near it (such as operations performed by the user on the touch panel 531 with a finger, a stylus, or any other suitable object or accessory), and drive a corresponding connection device according to a preset program. Optionally, the touch panel 531 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 560, and can receive and execute commands sent by the processor 560. In addition, the touch panel 531 may be implemented in multiple types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 531, the input unit 530 may also include other input devices 532, which may include but are not limited to one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick.
The display unit 540 may be configured to display information input by the user or information provided to the user, as well as the various menu interfaces of the mobile terminal 500. The display unit 540 may include a display panel 541; optionally, the display panel 541 may be configured in the form of an LCD, an organic light-emitting diode (Organic Light-Emitting Diode, OLED), or the like.

It should be noted that the touch panel 531 may cover the display panel 541 to form a touch display screen. When the touch display screen detects a touch operation on or near it, the operation is transmitted to the processor 560 to determine the type of the touch event, and the processor 560 then provides a corresponding visual output on the touch display screen according to the type of the touch event.
The touch display screen includes an application interface display area and a common control display area. The arrangement of these two display areas is not limited; they may be arranged one above the other, side by side, or in any other arrangement that distinguishes the two display areas. The application interface display area may be used to display the interfaces of applications. Each interface may contain interface elements such as icons of at least one application and/or widget desktop controls. The application interface display area may also be an empty interface containing no content. The common control display area is used to display controls with a high usage rate, for example, application icons such as a settings button, interface numbers, a scroll bar, and a phone book icon.
The photographing component 5110 includes the first camera and the second camera, where the first camera and the second camera are used for ranging and for capturing images.
The processor 560 is the control center of the mobile terminal 500. It connects the various parts of the whole mobile phone using various interfaces and lines, and performs the various functions of the mobile terminal 500 and processes data by running or executing software programs and/or modules stored in a first memory 521 and calling data stored in a second memory 522, thereby performing overall monitoring of the mobile terminal 500. Optionally, the processor 560 may include one or more processing units.
In the embodiments of the present invention, by calling the software programs and/or modules stored in the first memory 521 and/or the data stored in the second memory 522, the processor 560 is configured to: perform ranging on a target subject using a first camera and a second camera simultaneously, to obtain distance values between the mobile terminal and multiple feature points of the target subject; acquire, by the first camera, a first image containing the target subject; and extract, according to the multiple distance values, image information of the target subject from the first image.
Optionally, the processor 560 is further configured to: obtain a second image;

Optionally, the processor 560 is further configured to: add the image information of the target subject to the second image, to generate a third image.

Optionally, the processor 560 is further configured to: receive a setting position of the user for the image information;

Optionally, the processor 560 is further configured to: add the image information to the second image according to the setting position, to generate the third image.

Optionally, the processor 560 is further configured to: perform a characteristic operation on the second image, to obtain a characteristic image; wherein the characteristic operation includes a grayscale conversion operation and/or a zoom operation and/or a rotation operation;

Optionally, the processor 560 is further configured to: add the image information of the target subject to the characteristic image, to generate the third image.

Optionally, the processor 560 is further configured to: obtain focal length parameters of the first camera and the second camera;

Optionally, the processor 560 is further configured to: obtain a center-distance parameter between the first camera and the second camera;

Optionally, the processor 560 is further configured to: obtain a characteristic disparity parameter of the first camera and the second camera;

Optionally, the processor 560 is further configured to: calculate the distance values between the mobile terminal and the multiple feature points of the target subject using the focal length parameters, the center-distance parameter, and the characteristic disparity parameter.

Optionally, the processor 560 is further configured to: establish a characteristic coordinate system using the multiple distance values, to obtain feature difference data and feature size data;

Optionally, the processor 560 is further configured to: obtain image size data of the first image;

Optionally, the processor 560 is further configured to: obtain feature pixel data based on the image size data, the feature size data, and the feature difference data;

Optionally, the processor 560 is further configured to: extract the image information of the first image based on the feature pixel data.
It can be seen that, in the embodiments of the present invention, a characteristic coordinate system is established using the multiple distance values to obtain feature difference data and feature size data; image size data of the first image is obtained; feature pixel data are calculated using the image size data, the feature size data, and the feature difference data; and the image information of the first image is extracted using the feature pixel data. A setting position of the user for the image information is received, and the image information is added to the second image according to the setting position to generate a third image; or a characteristic operation is performed on the second image to obtain a characteristic image, and the image information of the target subject is added to the characteristic image to generate the third image. In the embodiments of the present invention, the image information of the target subject is extracted accurately, solving the "matting" problem in low-light environments; the image information of the target subject is added to the second image according to the user's needs, further enhancing the user's experience, improving the quality and aesthetics of the image, and strengthening user engagement.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. A skilled artisan may use different methods to implement the described functions for each specific application, but such implementation should not be considered beyond the scope of the present invention.

Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems, devices, and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not described here again.

In the embodiments provided in this application, it should be understood that the disclosed devices and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division of the units is only a division by logical function, and there may be other ways of division in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. Moreover, the couplings, direct couplings, or communication connections shown or discussed between one another may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or of other forms.

The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place, or they may be distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

In addition, the functional units in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist physically separately, or two or more units may be integrated in one unit.

If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a portable hard drive, a ROM, a RAM, a magnetic disk, or an optical disc.

The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the technical field can readily conceive of changes or substitutions within the technical scope disclosed by the present invention, and all such changes or substitutions shall be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the scope of the claims.
Claims (12)

1. An image processing method, applied to a mobile terminal, characterized in that the mobile terminal comprises at least two cameras, the method comprising:
performing ranging on a target subject using a first camera and a second camera simultaneously, to obtain distance values between the mobile terminal and multiple feature points of the target subject;
acquiring, by the first camera, a first image containing the target subject;
extracting, according to the multiple distance values, image information of the target subject from the first image.
2. The method according to claim 1, characterized in that the method further comprises:
obtaining a second image;
adding the image information of the target subject to the second image, to generate a third image.
3. The method according to claim 2, characterized in that the step of adding the image information of the target subject to the second image to generate the third image comprises:
receiving a setting position selected by a user for the image information of the target subject;
adding, according to the setting position, the image information at the setting position of the second image, to generate the third image.
4. The method according to claim 2, characterized in that the step of adding the image information of the target subject to the second image to generate the third image further comprises:
performing a characteristic operation on the second image, to obtain a characteristic image; wherein the characteristic operation includes a grayscale conversion operation and/or a zoom operation and/or a rotation operation;
adding the image information of the target subject to the characteristic image, to generate the third image.
5. The method according to claim 1, characterized in that the step of performing ranging on the target subject using the first camera and the second camera simultaneously, to obtain the distance values between the mobile terminal and the multiple feature points of the target subject, comprises:
obtaining focal length parameters of the first camera and the second camera;
obtaining a center-distance parameter between the first camera and the second camera;
obtaining a characteristic disparity parameter of the first camera and the second camera;
calculating the distance values between the mobile terminal and the multiple feature points of the target subject using the focal length parameters, the center-distance parameter, and the characteristic disparity parameter.
6. The method according to claim 1, characterized in that the step of extracting the image information of the target subject from the first image according to the multiple distance values comprises:
establishing a characteristic coordinate system using the multiple distance values, to obtain feature difference data and feature size data;
obtaining image size data of the first image;
obtaining feature pixel data based on the image size data, the feature size data, and the feature difference data;
extracting the image information of the first image based on the feature pixel data.
7. A mobile terminal, characterized in that the mobile terminal comprises at least two cameras, the mobile terminal comprising:
a distance value obtaining module, configured to perform ranging on a target subject using a first camera and a second camera simultaneously, to obtain distance values between the mobile terminal and multiple feature points of the target subject;
a first image obtaining module, configured to acquire, by the first camera, a first image containing the target subject;
an image information extraction module, configured to extract, according to the multiple distance values, image information of the target subject from the first image.
8. The terminal according to claim 7, characterized in that the terminal further comprises:
a second image obtaining module, configured to obtain a second image;
an image generation module, configured to add the image information of the target subject to the second image, to generate a third image.
9. The terminal according to claim 8, characterized in that the image generation module comprises:
a setting position receiving submodule, configured to receive a setting position of a user for the image information;
a third image generation submodule, configured to add the image information to the second image according to the setting position, to generate the third image.
10. The terminal according to claim 8, characterized in that the image generation module further comprises:
a characteristic image obtaining submodule, configured to perform a characteristic operation on the second image, to obtain a characteristic image; wherein the characteristic operation includes a grayscale conversion operation and/or a zoom operation and/or a rotation operation;
a third image generation submodule, configured to add the image information of the target subject to the characteristic image, to generate the third image.
11. The terminal according to claim 7, characterized in that the distance value obtaining module comprises:
a focal length parameter obtaining submodule, configured to obtain focal length parameters of the first camera and the second camera;
a center-distance parameter obtaining submodule, configured to obtain a center-distance parameter between the first camera and the second camera;
a characteristic disparity parameter obtaining submodule, configured to obtain a characteristic disparity parameter of the first camera and the second camera;
a distance value calculation submodule, configured to calculate the distance values between the mobile terminal and the multiple feature points of the target subject using the focal length parameters, the center-distance parameter, and the characteristic disparity parameter.
12. The terminal according to claim 7, characterized in that the image information extraction module comprises:
a feature difference data and feature size data obtaining submodule, configured to establish a characteristic coordinate system using the multiple distance values, to obtain feature difference data and feature size data;
an image size data obtaining submodule, configured to obtain image size data of the first image;
a feature pixel data calculation submodule, configured to obtain feature pixel data based on the image size data, the feature size data, and the feature difference data;
an image information extraction submodule, configured to extract the image information of the first image based on the feature pixel data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610941867.6A CN106657600B (en) | 2016-10-31 | 2016-10-31 | A kind of image processing method and mobile terminal |
Publications (2)

Publication Number | Publication Date
---|---
CN106657600A (en) | 2017-05-10
CN106657600B (en) | 2019-10-15
Family
ID=58821506
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610941867.6A Active CN106657600B (en) | 2016-10-31 | 2016-10-31 | A kind of image processing method and mobile terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106657600B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101344965A (en) * | 2008-09-04 | 2009-01-14 | 上海交通大学 | Tracking system based on binocular camera shooting |
US20090128646A1 (en) * | 2005-10-26 | 2009-05-21 | Masanori Itoh | Video reproducing device, video recorder, video reproducing method, video recording method, and semiconductor integrated circuit |
CN102779274A (en) * | 2012-07-19 | 2012-11-14 | 冠捷显示科技(厦门)有限公司 | Intelligent television face recognition method based on binocular camera |
CN104422441A (en) * | 2013-09-02 | 2015-03-18 | 联想(北京)有限公司 | Electronic equipment and positioning method |
KR20150111197A (en) * | 2014-03-25 | 2015-10-05 | 삼성전자주식회사 | Depth camera device, 3d image display system having the same and control methods thereof |
CN105847674A (en) * | 2016-03-25 | 2016-08-10 | 维沃移动通信有限公司 | Preview image processing method based on mobile terminal, and mobile terminal therein |
EP3079349A2 (en) * | 2015-03-17 | 2016-10-12 | MediaTek, Inc | Automatic image capture during preview and image recommendation |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109859265A (en) * | 2018-12-28 | 2019-06-07 | 维沃通信科技有限公司 | A kind of measurement method and mobile terminal |
CN109859265B (en) * | 2018-12-28 | 2024-04-19 | 维沃移动通信有限公司 | Measurement method and mobile terminal |
WO2021129073A1 (en) * | 2019-12-23 | 2021-07-01 | 华为技术有限公司 | Distance measurement method and device |
WO2022041737A1 (en) * | 2020-08-28 | 2022-03-03 | 北京石头世纪科技股份有限公司 | Distance measuring method and apparatus, robot, and storage medium |
Legal Events

Date | Code | Title | Description
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |