CN110012229A - A kind of image processing method and terminal - Google Patents


Info

Publication number
CN110012229A
CN110012229A
Authority
CN
China
Prior art keywords
image
camera
subject
terminal
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910294697.0A
Other languages
Chinese (zh)
Other versions
CN110012229B (en)
Inventor
宋晓光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201910294697.0A
Publication of CN110012229A
Application granted
Publication of CN110012229B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
        • H04: ELECTRIC COMMUNICATION TECHNIQUE
            • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
                    • H04N23/10: Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
                    • H04N23/60: Control of cameras or camera modules
                        • H04N23/61: Control of cameras or camera modules based on recognised objects
                            • H04N23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
                        • H04N23/62: Control of parameters via user interfaces
                        • H04N23/64: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
                    • H04N23/80: Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • Studio Devices (AREA)

Abstract

The present invention provides an image processing method and a terminal. The method comprises: obtaining a first image of a subject through a first camera, and obtaining a second image of the subject through a second camera; obtaining depth information and/or angle information in the first image, and identifying a background object in the subject according to the depth information and/or the angle information; and rejecting the background object included in the second image to generate a target image. The image processing method provided in the embodiments of the present invention can improve the efficiency of performing background rejection on images.

Description

Image processing method and terminal
Technical field
The present invention relates to the field of communication technology, and more particularly to an image processing method and a terminal.
Background art
With the rapid development of terminals, terminal functions have become increasingly diversified, and more and more users capture images with the camera function of a terminal. In practice, after an image is captured by the camera of a terminal, an image processing application is generally still needed to perform background rejection on the image, and the background rejection performed on an image by an image processing application takes a long time. It can be seen that the efficiency with which present terminals perform background rejection on images is low.
Summary of the invention
The embodiments of the present invention provide an image processing method and a terminal, to solve the problem that present terminals perform background rejection on images with low efficiency.
In order to solve the above technical problem, the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides an image processing method applied to a terminal including a first camera and a second camera, where the first camera and the second camera are located at the same side of the terminal. The method includes:
obtaining a first image of a subject through the first camera, and obtaining a second image of the subject through the second camera;
obtaining depth information and/or angle information in the first image, and identifying a background object in the subject according to the depth information and/or the angle information; and
rejecting the background object included in the second image to generate a target image.
In a second aspect, an embodiment of the present invention further provides a terminal, including a first camera and a second camera, where the first camera and the second camera are located at the same side of the terminal. The terminal further includes:
a first obtaining module, configured to obtain a first image of a subject through the first camera and a second image of the subject through the second camera;
a second obtaining module, configured to obtain depth information and/or angle information in the first image, and identify a background object in the subject according to the depth information and/or the angle information; and
a rejecting module, configured to reject the background object included in the second image to generate a target image.
In a third aspect, an embodiment of the present invention further provides a mobile terminal, including a memory, a processor, and a computer program stored on the memory and runnable on the processor, where the processor, when executing the computer program, implements the steps in the above image processing method.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps in the above image processing method.
In the embodiments of the present invention, a first image of a subject is obtained through the first camera, and a second image of the subject is obtained through the second camera; depth information and/or angle information in the first image is obtained, and a background object in the subject is identified according to the depth information and/or the angle information; and the background object included in the second image is rejected to generate a target image. In this way, the background object can be directly recognized from the depth information and/or angle information in the first image, and the background object in the second image can then be rejected, thereby improving the efficiency of rejecting the background object in the second image.
Brief description of the drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings required in the description of the embodiments are briefly introduced below. Apparently, the drawings in the following description show only some embodiments of the present invention, and those of ordinary skill in the art can derive other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of an image processing method provided in an embodiment of the present invention;
Fig. 2 is a first example diagram provided in an embodiment of the present invention;
Fig. 3 is a flowchart of another image processing method provided in an embodiment of the present invention;
Fig. 4 is a second example diagram provided in an embodiment of the present invention;
Fig. 5 is a structural diagram of a terminal provided in an embodiment of the present invention;
Fig. 6 is a structural diagram of another terminal provided in an embodiment of the present invention;
Fig. 7 is a structural diagram of another terminal provided in an embodiment of the present invention;
Fig. 8 is a structural diagram of another terminal provided in an embodiment of the present invention;
Fig. 9 is a structural diagram of another terminal provided in an embodiment of the present invention;
Fig. 10 is a structural diagram of another terminal provided in an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, Fig. 1 is a flowchart of an image processing method provided in an embodiment of the present invention. The method is applied to a terminal including a first camera and a second camera, where the first camera and the second camera are located at the same side of the terminal. As shown in Fig. 1, the method includes the following steps:
Step 101: obtain a first image of a subject through the first camera, and obtain a second image of the subject through the second camera.
The first camera may be a time-of-flight (Time Of Flight, TOF) camera, and the second camera may be a color camera; a color camera may also be called an RGB camera.
The subject may include a background object and a target object. For example, the target object may be a human face, and the background object may be a wall and a landscape painting hung on the wall.
It should be noted that the first image may be an image in a video obtained through the first camera for the subject, and the second image may be an image in a video obtained through the second camera for the subject.
When the first image is an image in a video obtained through the first camera for the subject, and the second image is an image in a video obtained through the second camera for the subject, the first image and the second image may be images obtained for the same subject at the same moment. For example, if the first image is an image the terminal obtains of a first scene through the first camera at a first moment, then the second image may be the image the terminal obtains of the first scene through the second camera at the first moment.
In addition, since the first image and the second image are images in videos, after the background object is rejected from each frame of second image in the video, the video may be synthesized with another video including a scene to be synthesized. Because the background object has been removed, the synthesis effect of the video is better and the synthesis is more efficient.
In this embodiment, the first image is an image in a video obtained through the first camera for the subject, and the second image is an image in a video obtained through the second camera for the subject. It can be seen that this embodiment can be applied to the rejection of background objects from each frame image in a video, so that this embodiment has a wider scope of application and is more flexible in use.
Step 102: obtain depth information and/or angle information in the first image, and identify the background object in the subject according to the depth information and/or the angle information, where the angle information is: angle information of each position in the subject relative to the first camera.
The positions in the first image whose depth information is not within a preset range, and/or whose angle information is greater than a preset angle threshold, constitute the background object. For example, the preset range may be 1 meter to 10 meters, and the preset angle threshold may be 45 degrees.
It should be noted that the preset range may refer to the range greater than or equal to a first value and less than or equal to a second value, where the second value is related to the precision of the first camera: the higher the precision of the first camera, the higher the second value.
In addition, the angle information of each position in the subject may specifically be: the angle between the line connecting that position and the first camera and a reference straight line, where the reference straight line is the straight line that passes through the first camera and is perpendicular to the side of the terminal provided with the first camera. For example, referring to Fig. 2, point A is the position of the first camera, B is a certain position in the subject, C is another position in the subject, and the straight line through A and D is the reference straight line. The angle information of point B is the angle between the line AB and the line AD, and the angle information of point C is the angle between the line AC and the line AD; the angle between the line AE and the line AD may represent the preset angle threshold, and the length of FH and the length of AG may represent the second value of the preset range.
Furthermore it is possible to which individually the angle information according to each position in subject relative to the first camera identifies background Object judges the line between each position and the first camera, whether the angle between reference straight line is less than preset angle Threshold value is spent, the part that each position that above-mentioned angle is greater than predetermined angle threshold value forms is background object;It is of course also possible to individually Background object, the position composition of depth information not within a preset range are identified according to the depth information of each position in subject Part be background object.
It should be noted that can also believe in combination with each position in subject relative to the angle of the first camera The depth information of each position identifies background object in breath and subject.Such as: it can first determine and be located in the first image Then parts of images within the scope of predetermined angle threshold value judges one of depth information in above-mentioned parts of images within a preset range Partial image is target object, then the image in the first image in addition to above-mentioned target object is background object.
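As an illustration only (not part of the patent disclosure), the combined depth-and-angle test described above can be sketched in a few lines of numpy. The pinhole intrinsics `fx` and `cx` and the one-dimensional treatment of the viewing angle are assumptions introduced for the sketch; the default preset range (1 to 10 meters) and the 45-degree threshold follow the example values in the text.

```python
import numpy as np

def identify_background(depth_map, fx, cx, depth_range=(1.0, 10.0),
                        angle_threshold_deg=45.0):
    """Return a boolean mask marking background pixels in a depth image.

    A pixel is background when its depth lies outside depth_range, or
    when the angle between its viewing ray and the camera's reference
    axis exceeds angle_threshold_deg. An illustrative 1-D pinhole model
    is assumed: the ray angle of column u is arctan(|u - cx| / fx).
    """
    h, w = depth_map.shape
    us = np.arange(w, dtype=np.float64)
    ray_angles = np.degrees(np.arctan2(np.abs(us - cx), fx))
    angle_mask = np.broadcast_to(ray_angles > angle_threshold_deg, (h, w))

    near, far = depth_range
    depth_mask = (depth_map < near) | (depth_map > far)

    # Background = outside the preset depth range OR beyond the angle threshold
    return depth_mask | angle_mask
```

The returned mask, computed on the first (depth) image, is what the later steps apply to the second (color) image.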
Step 103: reject the background object included in the second image to generate a target image.
After the background object in the first image is identified, the portion of the second image corresponding to the background object may be rejected, to generate a target image including only the target object.
For example, if the subject includes a human face and a wall, the background object in the first image can be identified as the wall according to the depth information and/or angle information in the first image, and the wall included in the second image can then be directly rejected, to generate a target image including only the face.
In the embodiment of the present invention, the above terminal may be a mobile phone, a tablet personal computer, a laptop computer, a personal digital assistant (Personal Digital Assistant, PDA), a mobile Internet device (Mobile Internet Device, MID), a wearable device, or the like.
In the embodiments of the present invention, a first image of a subject is obtained through the first camera, and a second image of the subject is obtained through the second camera; depth information and/or angle information in the first image is obtained, and the background object in the subject is identified according to the depth information and/or the angle information; and the background object included in the second image is rejected to generate a target image. In this way, the background object can be directly recognized from the depth information and/or angle information in the first image, and the background object in the second image can then be rejected, thereby improving the efficiency of rejecting the background object in the second image.
Referring to Fig. 3, Fig. 3 is a flowchart of another image processing method provided in an embodiment of the present invention. The main difference between this embodiment and the previous embodiment is that the values of the preset range or the preset angle threshold can be preset according to an operation instruction of the user. As shown in Fig. 3, the method includes the following steps:
Step 301: display a setting interface.
The setting interface may be displayed directly when the user opens the camera application of the terminal and selects a background-object rejection mode. Of course, the setting interface may also not be displayed directly when the user opens the camera application of the terminal and selects the background-object rejection mode, but only displayed when a display instruction input by the user is received. The specific manner is not limited here.
Step 302: receive an operation instruction of the user, and set the depth information and/or the angle information in the setting interface according to the operation instruction.
The operation instruction of the user may be a press instruction, a touch instruction, a voice instruction, or the like; the specific type is not limited here.
The depth information and/or the angle information may be set in the setting interface. For example, the depth information may include the range of depth information, i.e., the preset range, and the angle information may include the preset angle threshold. Of course, a pre-stored picture may also be obtained directly, and the depth information or angle information of the picture modified.
For example, at least one of a line corresponding to the preset range and a line corresponding to the preset angle threshold may be displayed in the setting interface, and the user may adjust the preset range or the preset angle threshold by dragging the line corresponding to the preset range or the line corresponding to the preset angle threshold.
In addition, when the line corresponding to the preset range and the line corresponding to the preset angle threshold are displayed in the setting interface at the same time, the two kinds of lines may enclose a preset pattern. Referring to Fig. 4, A, E, and G may refer to the definitions in the embodiment shown in Fig. 2; in Fig. 4 the preset pattern is enclosed by the two kinds of lines AE and AG. The specific type of the preset pattern is not limited here; for example, the preset pattern may be a figure such as a cone, a pyramid, or a hemisphere, and the user may drag the lines in the preset pattern to set the preset range or the preset angle threshold. In this way, the part of the first image within the preset pattern is the target object, and the part of the first image not within the preset pattern is the background object, so that the recognition rate of the background object can be improved.
Furthermore it is possible to target object and the profile of background object in the first image of label, and by the first image and second Image is stacked together, and then rejects the part of background object in the second image.It is, of course, also possible to save in the first image The depth information of target object, and the color image information for including in the depth information of above-mentioned target object and the second image is folded It is added together, according to algorithm of convex hull (such as Graham scan-line algorithm, Mlekman algorithm), finds the depth of above-mentioned target object Spend the profile that information is formed.Then the corresponding pixel of color image information in the second image in the profile is retained, no The corresponding pixel of color image information in the profile replaces with transparent pixels point, carries on the back in the second image to reach and reject The purpose of scape object.
In addition, a default value of at least one of the preset range and the preset angle threshold may also be displayed in the setting interface, and the user may directly input a setting value of the preset range or the preset angle threshold.
It should be noted that the specific setting manner of the preset range and the preset angle threshold is not limited here.
It should be noted that steps 301 and 302 are optional.
Step 303: obtain a first image of a subject through the first camera, and obtain a second image of the subject through the second camera.
The subject may include a background object and a target object. For example, the target object may be a human face, and the background object may be a wall and a landscape painting hung on the wall.
It should be noted that the first image may be an image in a video obtained through the first camera for the subject, and the second image may be an image in a video obtained through the second camera for the subject.
In addition, when the first image is an image in a video obtained through the first camera for the subject, and the second image is an image in a video obtained through the second camera for the subject, reference may be made to the statement in the previous embodiment; the same advantageous effects as in the previous embodiment can be achieved, and details are not described here again.
Step 304: obtain depth information and/or angle information in the first image, and identify the background object in the subject according to the depth information and/or the angle information, where the angle information is: angle information of each position in the subject relative to the first camera; the depth information of the background object is not within a preset range, and/or its angle information is greater than a preset angle threshold.
The positions in the first image whose depth information is not within the preset range, and/or whose angle information is greater than the preset angle threshold, constitute the background object. For example, the preset range may be 1 meter to 10 meters, and the preset angle threshold may be 45 degrees.
It should be noted that the preset range may refer to the range greater than a first value and less than a second value, where the second value is related to the precision of the first camera: the higher the precision of the first camera, the higher the second value.
Step 305: reject the background object included in the second image to generate a target image.
The pixels of the background object in the second image may be directly deleted.
Optionally, the rejecting the background object included in the second image to generate a target image includes:
identifying pixels of the background object in the second image; and
setting pixel values corresponding to the pixels of the background object in the second image to a first pixel value, to generate the target image.
The first pixel value is less than a preset threshold. It should be noted that a pixel value may also be called an RGB value, and the value of the preset threshold is not limited here; for example, the preset threshold may be 0.1. Preferably, the pixel values corresponding to the pixels of the background object in the second image may be set to 0, and a pixel whose pixel value is 0 may be called a transparent pixel.
In this embodiment, setting the pixel values corresponding to the pixels of the background object in the second image to the first pixel value can likewise achieve the effect of rejecting the background object, so that the step of rejecting the background object is simpler and more efficient.
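A minimal numpy sketch of this pixel replacement, assuming the second image is an (H, W, 3) array and the background mask has already been derived from the first image; the function name and array shapes are illustrative, not part of the patent:

```python
import numpy as np

def reject_background(rgb_image, background_mask):
    """Attach an alpha channel and zero out background pixels.

    rgb_image: (H, W, 3) uint8 color image (the "second image").
    background_mask: (H, W) bool array, True where the background
    object was identified in the depth image. Setting the pixel
    value to 0 yields the "transparent pixels" described in the text.
    """
    h, w, _ = rgb_image.shape
    alpha = np.full((h, w, 1), 255, dtype=np.uint8)
    rgba = np.concatenate([rgb_image, alpha], axis=2)
    rgba[background_mask] = 0   # first pixel value = 0 -> transparent
    return rgba
```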
Optionally, after the setting pixel values corresponding to the pixels of the background object in the second image to a first pixel value to generate the target image, the method further includes:
synthesizing a three-dimensional model of the target object from the pixels of the target object in the target image and the depth information of the target object in the first image, where the target object is the object in the subject other than the background object; and
running a virtual reality application, and constructing a virtual reality application scene of the three-dimensional model through the virtual reality application.
In this way, by constructing a virtual reality application scene of the three-dimensional model, the application scenarios are more diversified and flexible.
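The synthesis of a three-dimensional model from the target pixels and their depth information can be illustrated as a point-cloud back-projection. The patent does not specify a reconstruction method; the pinhole intrinsics below are placeholder assumptions introduced for the sketch.

```python
import numpy as np

def target_point_cloud(target_rgba, depth_map, fx=500.0, fy=500.0,
                       cx=320.0, cy=240.0):
    """Back-project the target object's pixels into 3-D points.

    Combines the target image's non-transparent pixels with their depth
    values using an assumed pinhole model (fx, fy, cx, cy are
    illustrative intrinsics, not values from the patent):
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth.
    Returns an (N, 3) array of points and an (N, 3) array of colors.
    """
    vs, us = np.nonzero(target_rgba[..., 3] > 0)   # rows (v), cols (u)
    z = depth_map[vs, us].astype(np.float64)
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    points = np.stack([x, y, z], axis=1)
    colors = target_rgba[vs, us, :3]
    return points, colors
```

The resulting colored point cloud is one plausible input for the virtual reality application scene mentioned above.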
Optionally, after the setting pixel values corresponding to the pixels of the background object in the second image to a first pixel value to generate the target image, the method further includes:
obtaining a third image including a scene to be synthesized; and
synthesizing the target image with the third image.
The scene to be synthesized may be, for example, a landscape image or an image of a celebrity at some location.
In this way, the target image can be synthesized directly with the third image, without using an image synthesis application to perform the synthesis, which simplifies the user's operation and improves the efficiency of image synthesis.
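Once the background pixels are transparent, synthesizing the target image with the third image reduces to compositing. A hard-mask numpy sketch (illustrative only; the patent does not prescribe a blending rule):

```python
import numpy as np

def synthesize(target_rgba, scene_rgb):
    """Composite the transparent-background target image over a third
    image containing the scene to be synthesized.

    Hard compositing: where the target's alpha is non-zero the target
    pixel is kept; elsewhere the scene shows through.
    """
    out = scene_rgb.copy()
    mask = target_rgba[..., 3] > 0
    out[mask] = target_rgba[..., :3][mask]
    return out
```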
In the embodiment of the present invention, through steps 301 to 305, the user can preset corresponding values of the preset range or the preset angle threshold according to different usage scenarios, so that the flexibility of rejecting background objects can be improved.
Referring to Fig. 5, Fig. 5 is a structural diagram of a terminal provided in an embodiment of the present invention, which can implement the details of the image processing methods in the above embodiments and achieve the same effects. The terminal 500 includes a first camera and a second camera, where the first camera and the second camera are located at the same side of the terminal. As shown in Fig. 5, the terminal 500 further includes:
a first obtaining module 501, configured to obtain a first image of a subject through the first camera and a second image of the subject through the second camera;
a second obtaining module 502, configured to obtain depth information and/or angle information in the first image, and identify the background object in the subject according to the depth information and/or the angle information, where the angle information is: angle information of each position in the subject relative to the TOF camera; the depth information of the background object is not within a preset range, and/or its angle information is greater than a preset angle threshold; and
a rejecting module 503, configured to reject the background object included in the second image to generate a target image.
Optionally, referring to Fig. 6, the rejecting module 503 includes:
an identifying submodule 5031, configured to identify pixels of the background object in the second image; and
a replacing submodule 5032, configured to set pixel values corresponding to the pixels of the background object in the second image to a first pixel value to generate the target image, where the first pixel value is less than a preset threshold.
Optionally, referring to Fig. 7, the terminal 500 further includes:
a display module 504, configured to display a setting interface; and
a setup module 505, configured to receive an operation instruction of the user, and set the depth information and/or the angle information in the setting interface according to the operation instruction.
Optionally, referring to Fig. 8, the terminal 500 further includes:
a first synthesis module 506, configured to synthesize a three-dimensional model of the target object from the pixels of the target object in the target image and the depth information of the target object in the first image, where the target object is the object in the subject other than the background object; and
a construction module 507, configured to run a virtual reality application, and construct a virtual reality application scene of the three-dimensional model through the virtual reality application.
Optionally, referring to Fig. 9, the terminal 500 further includes:
a third obtaining module 508, configured to obtain a third image including a scene to be synthesized; and
a second synthesis module 509, configured to synthesize the target image with the third image.
Optionally, the first image is an image in a video obtained through the first camera for the subject, and the second image is an image in a video obtained through the second camera for the subject.
The terminal 500 can implement each process implemented by the terminal in the method embodiments of Fig. 1 and Fig. 3; to avoid repetition, details are not described here again.
Fig. 10 is a schematic diagram of the hardware structure of a mobile terminal implementing each embodiment of the present invention.
The mobile terminal 1000 includes but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, a processor 1010, a power supply 1011, a first camera, and a second camera, where the first camera and the second camera are located at the same side of the mobile terminal 1000. Those skilled in the art will understand that the mobile terminal structure shown in Fig. 10 does not constitute a limitation on the mobile terminal; the mobile terminal may include more or fewer components than shown, or combine certain components, or arrange components differently. In the embodiments of the present invention, the mobile terminal includes but is not limited to a mobile phone, a tablet computer, a notebook computer, a palmtop computer, an in-vehicle terminal, a wearable device, a pedometer, and the like.
The processor 1010 is configured to: obtain a first image of a subject through the first camera, and obtain a second image of the subject through the second camera;
obtain depth information and/or angle information in the first image, and identify a background object in the subject according to the depth information and/or the angle information, where the angle information is the angle of each position in the subject relative to the first camera, and the depth information of the background object is not within a preset range and/or its angle information is greater than a preset angle threshold; and
remove the background object included in the second image to obtain a target image.
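As an illustration only (the patent does not disclose an implementation), the background-identification criteria described above — depth outside a preset range and/or angle above a preset threshold — can be sketched as a per-pixel mask. The depth range, angle threshold, and sample values below are hypothetical.

```python
import numpy as np

def identify_background_mask(depth_map, angle_map,
                             depth_range=(0.3, 2.0),
                             angle_threshold=30.0):
    """Mark a pixel as background when its depth falls outside the preset
    range and/or its angle relative to the first camera exceeds the
    preset angle threshold."""
    outside_depth = (depth_map < depth_range[0]) | (depth_map > depth_range[1])
    too_angled = angle_map > angle_threshold
    return outside_depth | too_angled

# Hypothetical per-pixel depth (metres) and angle (degrees) maps:
depth = np.array([[0.5, 5.0], [1.0, 0.1]])
angle = np.array([[10.0, 15.0], [45.0, 5.0]])
mask = identify_background_mask(depth, angle)
# mask is True wherever the pixel is judged to belong to the background
```

The mask produced here would then be applied to the second image in the removal step that follows.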
Optionally, the removing, performed by the processor 1010, of the background object included in the second image to generate the target image includes:
identifying pixels of the background object in the second image; and
setting the pixel values corresponding to the pixels of the background object in the second image to a first pixel value to generate the target image, where the first pixel value is less than a preset threshold.
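A minimal sketch of this pixel-replacement step, assuming a single-channel image and taking the first pixel value to be 0 (any value below the preset threshold would serve); the sample arrays are hypothetical:

```python
import numpy as np

def reject_background(image, background_mask, first_pixel_value=0):
    """Set every background pixel's value to the first pixel value,
    producing the target image with the background removed."""
    target = image.copy()
    target[background_mask] = first_pixel_value
    return target

img = np.array([[100, 200], [150, 250]], dtype=np.uint8)
mask = np.array([[False, True], [True, False]])
target = reject_background(img, mask)
# target == [[100, 0], [0, 250]]
```

Keeping the replacement value below the preset threshold lets later stages distinguish cleared background pixels from genuine foreground content.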
Optionally, the processor 1010 is further configured to, before the first image of the subject is obtained through the first camera and the second image of the subject is obtained through the second camera:
display a settings interface; and
receive an operation instruction from the user, and set the depth information and/or the angle information in the settings interface according to the operation instruction.
Optionally, the processor 1010 is further configured to, after the pixel values corresponding to the pixels of the background object in the second image are set to the first pixel value and the target image is generated:
synthesize a three-dimensional model of a target object from the pixels of the target object in the target image and the depth information of the target object in the first image, where the target object is an object in the subject other than the background object; and
run a virtual reality application, and construct a virtual reality application scene of the three-dimensional model through the virtual reality application;
Alternatively,
obtain a third image including a scene to be synthesized; and
synthesize the target image with the third image.
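The second alternative — compositing the target image over a third image containing the scene to be synthesized — can be sketched as follows. This is an assumed implementation, not the patent's: it treats every pixel still holding the first pixel value (0 here) as cleared background and lets the third image show through at those positions; the sample arrays are hypothetical.

```python
import numpy as np

def synthesize_with_scene(target_image, third_image, first_pixel_value=0):
    """Composite the target image over the scene image: wherever the
    target image was cleared to the first pixel value, the third
    image's pixels are kept; elsewhere the target image's pixels win."""
    foreground = target_image != first_pixel_value
    result = third_image.copy()
    result[foreground] = target_image[foreground]
    return result

target = np.array([[0, 200], [150, 0]], dtype=np.uint8)   # background cleared to 0
scene = np.array([[10, 20], [30, 40]], dtype=np.uint8)    # scene to be synthesized
out = synthesize_with_scene(target, scene)
# out == [[10, 200], [150, 40]]
```

A real implementation would likely carry an explicit mask alongside the target image rather than relying on the sentinel pixel value, since genuine foreground pixels could coincidentally equal it.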
Optionally, the first image is an image in a video of the subject captured by the first camera, and the second image is an image in a video of the subject captured by the second camera.
The mobile terminal provided in the embodiments of the present invention removes the background from an image with good efficiency.
It should be understood that, in the embodiments of the present invention, the radio frequency unit 1001 may be used to receive and send signals during information transmission and reception or during a call; specifically, after receiving downlink data from a base station, it delivers the data to the processor 1010 for processing, and it sends uplink data to the base station. In general, the radio frequency unit 1001 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 1001 may also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband Internet access through the network module 1002, for example, helping the user send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 1003 may convert audio data received by the radio frequency unit 1001 or the network module 1002, or stored in the memory 1009, into an audio signal and output it as sound. Moreover, the audio output unit 1003 may also provide audio output related to a specific function performed by the mobile terminal 1000 (for example, a call signal reception sound or a message reception sound). The audio output unit 1003 includes a loudspeaker, a buzzer, a receiver, and the like.
The input unit 1004 is used to receive audio or video signals. The input unit 1004 may include a graphics processing unit (GPU) 10041 and a microphone 10042. The graphics processing unit 10041 processes image data of still pictures or video obtained by an image capture apparatus (such as a camera) in video capture mode or image capture mode. The processed image frames may be displayed on the display unit 1006. The image frames processed by the graphics processing unit 10041 may be stored in the memory 1009 (or another storage medium) or sent via the radio frequency unit 1001 or the network module 1002. The microphone 10042 may receive sound and may process such sound into audio data. In a telephone call mode, the processed audio data may be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 1001 and output.
The mobile terminal 1000 further includes at least one sensor 1005, for example, an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor includes an ambient light sensor and a proximity sensor, where the ambient light sensor may adjust the brightness of the display panel 10061 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 10061 and/or the backlight when the mobile terminal 1000 is moved to the ear. As a kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when static, and can be used to identify the posture of the mobile terminal (such as landscape/portrait switching, related games, and magnetometer pose calibration) and for vibration-recognition-related functions (such as a pedometer or tapping). The sensor 1005 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described in detail here.
The display unit 1006 is used to display information input by the user or information provided to the user. The display unit 1006 may include a display panel 10061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 1007 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071, also referred to as a touch screen, collects touch operations by the user on or near it (for example, operations by the user on or near the touch panel 10071 using a finger, a stylus, or any other suitable object or accessory). The touch panel 10071 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 1010, and receives and executes commands sent by the processor 1010. Furthermore, the touch panel 10071 may be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 10071, the user input unit 1007 may also include other input devices 10072. Specifically, the other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here.
Further, the touch panel 10071 may cover the display panel 10061. After detecting a touch operation on or near it, the touch panel 10071 transmits it to the processor 1010 to determine the type of the touch event, and the processor 1010 then provides corresponding visual output on the display panel 10061 according to the type of the touch event. Although in Figure 10 the touch panel 10071 and the display panel 10061 are shown as two separate components implementing the input and output functions of the mobile terminal, in some embodiments the touch panel 10071 and the display panel 10061 may be integrated to implement the input and output functions of the mobile terminal, which is not specifically limited here.
The interface unit 1008 is an interface for connecting an external device to the mobile terminal 1000. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 1008 may be used to receive input (for example, data information or power) from an external device and transmit the received input to one or more elements in the mobile terminal 1000, or may be used to transmit data between the mobile terminal 1000 and an external device.
The memory 1009 may be used to store software programs and various data. The memory 1009 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application required by at least one function (such as a sound playback function or an image playback function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data or a phone book) and the like. In addition, the memory 1009 may include a high-speed random access memory, and may also include a non-volatile memory, for example, at least one disk storage device, a flash memory device, or another solid-state storage device.
The processor 1010 is the control center of the mobile terminal. It uses various interfaces and lines to connect all parts of the entire mobile terminal, and performs the various functions of the mobile terminal and processes data by running or executing the software programs and/or modules stored in the memory 1009 and invoking the data stored in the memory 1009, thereby monitoring the mobile terminal as a whole. The processor 1010 may include one or more processing units; preferably, the processor 1010 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 1010.
The mobile terminal 1000 may further include a power supply 1011 (such as a battery) that supplies power to all components. Preferably, the power supply 1011 may be logically connected to the processor 1010 through a power management system, so as to implement functions such as managing charging, discharging, and power consumption through the power management system.
In addition, the mobile terminal 1000 includes some functional modules that are not shown, which are not described in detail here.
Preferably, an embodiment of the present invention further provides a mobile terminal, including a processor 1010, a memory 1009, and a computer program stored on the memory 1009 and executable on the processor 1010. When the computer program is executed by the processor 1010, each process of the above image processing method embodiment is implemented, and the same technical effect can be achieved; to avoid repetition, details are not described here again.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, each process of the above image processing method embodiment is implemented, and the same technical effect can be achieved; to avoid repetition, details are not described here again. The computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It should be noted that, in this document, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or apparatus. In the absence of more restrictions, an element limited by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes the element.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or as the part that contributes to the prior art, can be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present invention.
The embodiments of the present invention are described above with reference to the accompanying drawings, but the present invention is not limited to the above specific embodiments, which are merely illustrative rather than restrictive. Inspired by the present invention, those skilled in the art can devise many further forms without departing from the purpose of the present invention and the scope protected by the claims, all of which fall within the protection of the present invention.

Claims (12)

1. An image processing method, applied to a terminal including a first camera and a second camera, wherein the first camera and the second camera are located on the same side of the terminal, the method comprising:
obtaining a first image of a subject through the first camera, and obtaining a second image of the subject through the second camera;
obtaining depth information and/or angle information in the first image, and identifying a background object in the subject according to the depth information and/or the angle information; and
removing the background object included in the second image to generate a target image.
2. The method according to claim 1, wherein the removing the background object included in the second image to generate the target image comprises:
identifying pixels of the background object in the second image; and
setting pixel values corresponding to the pixels of the background object in the second image to a first pixel value to generate the target image, wherein the first pixel value is less than a preset threshold.
3. The method according to claim 1, wherein before the obtaining the first image of the subject through the first camera and the obtaining the second image of the subject through the second camera, the method further comprises:
displaying a settings interface; and
receiving an operation instruction from a user, and setting the depth information and/or the angle information in the settings interface according to the operation instruction.
4. The method according to claim 2, wherein after the setting the pixel values corresponding to the pixels of the background object in the second image to the first pixel value to generate the target image, the method further comprises:
synthesizing a three-dimensional model of a target object from pixels of the target object in the target image and depth information of the target object in the first image, wherein the target object is an object in the subject other than the background object; and
running a virtual reality application, and constructing a virtual reality application scene of the three-dimensional model through the virtual reality application;
Alternatively,
obtaining a third image including a scene to be synthesized; and
synthesizing the target image with the third image.
5. The method according to any one of claims 1-4, wherein the first image is an image in a video of the subject captured by the first camera, and the second image is an image in a video of the subject captured by the second camera.
6. A terminal, comprising a first camera and a second camera, wherein the first camera and the second camera are located on the same side of the terminal, the terminal further comprising:
a first obtaining module, configured to obtain a first image of a subject through the first camera and obtain a second image of the subject through the second camera;
a second obtaining module, configured to obtain depth information and/or angle information in the first image, and identify a background object in the subject according to the depth information and/or the angle information; and
a removing module, configured to remove the background object included in the second image to generate a target image.
7. The terminal according to claim 6, wherein the removing module comprises:
an identification submodule, configured to identify pixels of the background object in the second image; and
a replacement submodule, configured to set pixel values corresponding to the pixels of the background object in the second image to a first pixel value to generate the target image, wherein the first pixel value is less than a preset threshold.
8. The terminal according to claim 6, wherein the terminal further comprises:
a display module, configured to display a settings interface; and
a setup module, configured to receive an operation instruction from a user and set the depth information and/or the angle information in the settings interface according to the operation instruction.
9. The terminal according to claim 7, wherein the terminal further comprises:
a first synthesis module, configured to synthesize a three-dimensional model of a target object from pixels of the target object in the target image and depth information of the target object in the first image, wherein the target object is an object in the subject other than the background object; and
a construction module, configured to run a virtual reality application and construct a virtual reality application scene of the three-dimensional model through the virtual reality application;
Alternatively,
a third obtaining module, configured to obtain a third image including a scene to be synthesized; and
a second synthesis module, configured to synthesize the target image with the third image.
10. The terminal according to any one of claims 6-9, wherein the first image is an image in a video of the subject captured by the first camera, and the second image is an image in a video of the subject captured by the second camera.
11. A mobile terminal, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps in the image processing method according to any one of claims 1-5.
12. A computer-readable storage medium, storing a computer program, wherein the computer program, when executed by a processor, implements the steps in the image processing method according to any one of claims 1-5.
CN201910294697.0A 2019-04-12 2019-04-12 Image processing method and terminal Active CN110012229B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910294697.0A CN110012229B (en) 2019-04-12 2019-04-12 Image processing method and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910294697.0A CN110012229B (en) 2019-04-12 2019-04-12 Image processing method and terminal

Publications (2)

Publication Number Publication Date
CN110012229A true CN110012229A (en) 2019-07-12
CN110012229B CN110012229B (en) 2021-01-08

Family

ID=67171466

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910294697.0A Active CN110012229B (en) 2019-04-12 2019-04-12 Image processing method and terminal

Country Status (1)

Country Link
CN (1) CN110012229B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113470138A (en) * 2021-06-30 2021-10-01 维沃移动通信有限公司 Image generation method and device, electronic equipment and readable storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102467661A (en) * 2010-11-11 2012-05-23 LG Electronics Inc. Multimedia device and method for controlling the same
US20130235223A1 (en) * 2012-03-09 2013-09-12 Minwoo Park Composite video sequence with inserted facial region
CN105447895A (en) * 2014-09-22 2016-03-30 酷派软件技术(深圳)有限公司 Hierarchical picture pasting method, device and terminal equipment
CN106327445A (en) * 2016-08-24 2017-01-11 王忠民 Image processing method and device, photographic equipment and use method thereof
CN106375662A (en) * 2016-09-22 2017-02-01 宇龙计算机通信科技(深圳)有限公司 Photographing method and device based on double cameras, and mobile terminal
CN107197169A (en) * 2017-06-22 2017-09-22 维沃移动通信有限公司 A kind of high dynamic range images image pickup method and mobile terminal
CN107194963A (en) * 2017-04-28 2017-09-22 努比亚技术有限公司 A kind of dual camera image processing method and terminal
CN107396084A (en) * 2017-07-20 2017-11-24 广州励丰文化科技股份有限公司 A kind of MR implementation methods and equipment based on dual camera
CN108111748A (en) * 2017-11-30 2018-06-01 维沃移动通信有限公司 A kind of method and apparatus for generating dynamic image
CN108322644A (en) * 2018-01-18 2018-07-24 努比亚技术有限公司 A kind of image processing method, mobile terminal and computer readable storage medium
CN108881730A (en) * 2018-08-06 2018-11-23 成都西纬科技有限公司 Image interfusion method, device, electronic equipment and computer readable storage medium
CN109035288A (en) * 2018-07-27 2018-12-18 北京市商汤科技开发有限公司 A kind of image processing method and device, equipment and storage medium

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102467661A (en) * 2010-11-11 2012-05-23 LG Electronics Inc. Multimedia device and method for controlling the same
US20130235223A1 (en) * 2012-03-09 2013-09-12 Minwoo Park Composite video sequence with inserted facial region
CN105447895A (en) * 2014-09-22 2016-03-30 酷派软件技术(深圳)有限公司 Hierarchical picture pasting method, device and terminal equipment
CN106327445A (en) * 2016-08-24 2017-01-11 王忠民 Image processing method and device, photographic equipment and use method thereof
CN106375662A (en) * 2016-09-22 2017-02-01 宇龙计算机通信科技(深圳)有限公司 Photographing method and device based on double cameras, and mobile terminal
CN107194963A (en) * 2017-04-28 2017-09-22 努比亚技术有限公司 A kind of dual camera image processing method and terminal
CN107197169A (en) * 2017-06-22 2017-09-22 维沃移动通信有限公司 A kind of high dynamic range images image pickup method and mobile terminal
CN107396084A (en) * 2017-07-20 2017-11-24 广州励丰文化科技股份有限公司 A kind of MR implementation methods and equipment based on dual camera
CN108111748A (en) * 2017-11-30 2018-06-01 维沃移动通信有限公司 A kind of method and apparatus for generating dynamic image
CN108322644A (en) * 2018-01-18 2018-07-24 努比亚技术有限公司 A kind of image processing method, mobile terminal and computer readable storage medium
CN109035288A (en) * 2018-07-27 2018-12-18 北京市商汤科技开发有限公司 A kind of image processing method and device, equipment and storage medium
CN108881730A (en) * 2018-08-06 2018-11-23 成都西纬科技有限公司 Image interfusion method, device, electronic equipment and computer readable storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113470138A (en) * 2021-06-30 2021-10-01 维沃移动通信有限公司 Image generation method and device, electronic equipment and readable storage medium
CN113470138B (en) * 2021-06-30 2024-05-24 维沃移动通信有限公司 Image generation method and device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
CN110012229B (en) 2021-01-08

Similar Documents

Publication Publication Date Title
CN109218648A (en) A kind of display control method and terminal device
CN109862258A (en) A kind of image display method and terminal device
CN108196740B (en) A kind of icon display method, device and mobile terminal
CN110035227A (en) Special effect display methods and terminal device
CN109151367A (en) A kind of video call method and terminal device
CN109685915A (en) A kind of image processing method, device and mobile terminal
CN109710165A (en) A kind of drawing processing method and mobile terminal
CN109407948A (en) A kind of interface display method and mobile terminal
CN109976629A (en) Image display method, terminal and mobile terminal
CN109523253A (en) A kind of method of payment and device
CN110096203A (en) A kind of screenshot method and mobile terminal
CN109544445A (en) A kind of image processing method, device and mobile terminal
CN109671034A (en) A kind of image processing method and terminal device
CN109448069A (en) A kind of template generation method and mobile terminal
CN109033913A (en) A kind of recognition methods of identification code and mobile terminal
CN107800968B (en) A kind of image pickup method and mobile terminal
CN110022551A (en) A kind of information interacting method and terminal device
CN109639981A (en) A kind of image capturing method and mobile terminal
CN109358913A (en) A kind of the starting method and terminal device of application program
CN108551562A (en) A kind of method and mobile terminal of video communication
CN108305342A (en) A kind of Work attendance method and mobile terminal
CN110012229A (en) A kind of image processing method and terminal
CN110213437A (en) A kind of edit methods and mobile terminal
CN109582266A (en) A kind of display screen operating method and terminal device
CN109164951A (en) Mobile terminal operating method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant