CN103702032A - Image processing method, device and terminal equipment - Google Patents


Info

Publication number
CN103702032A
CN103702032A
Authority
CN
China
Prior art keywords
frame image
screen
screen sub-region
sub-region
center point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310754238.9A
Other languages
Chinese (zh)
Other versions
CN103702032B (en)
Inventor
蔡志刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN201310754238.9A
Publication of CN103702032A
Application granted
Publication of CN103702032B
Legal status: Active
Anticipated expiration

Landscapes

  • Controls And Circuits For Display Device (AREA)

Abstract

The invention relates to the technical field of image processing. Embodiments of the invention provide an image processing method, an apparatus and a terminal device. The method includes the steps of: receiving an input event; determining n screen sub-regions from among m screen sub-regions included in a display screen of the terminal device; determining sharpness information of the local images, corresponding to the n screen sub-regions, included in each of q frames of images; determining, according to the sharpness information of the local images included in each of the q frames, the frame whose local image is the clearest among the q frames as a target image; and outputting and displaying the target image. By adopting the technical solution provided by the embodiments of the invention, the refocusing display effect of images is achieved through simple algorithmic processing of multiple frames of images, and computation time is saved.

Description

Image processing method, device and terminal equipment
Technical field
Embodiments of the present invention relate to the technical field of image processing, and in particular to an image processing method, an apparatus and a terminal device.
Background
As the functions integrated into mobile phones are continuously enriched, a user can use a mobile phone with a burst-shooting function to capture multiple images of the same target object, and wishes to browse the image with the highest sharpness.
In the prior art, a rear-facing multi-camera setup on a mobile phone is typically used to shoot the same target object and obtain multiple images, and a depth map of the multiple images can be calculated from the parallax between them. When the user browses a frame and taps a target subject of interest in it, the camera system performs a refocusing calculation based on the depth map and the image data of the frame currently being browsed, so that the target subject is sharpened and an image of high sharpness is presented to the user, producing a refocused display effect; meanwhile, the regions of the current frame other than the target subject are blurred to highlight the target subject.
However, the prior-art scheme for achieving the image refocusing display effect has high algorithm complexity, places high demands on the system hardware environment such as the central processing unit, and takes a long computation time.
Summary of the invention
Embodiments of the present invention provide an image processing method, an apparatus and a terminal device, which achieve the image refocusing display effect by performing simple algorithmic processing on multiple frames of images, thereby saving computation time.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
Receiving an input event;
Determining, according to the input event, n screen sub-regions from among m screen sub-regions included in a display screen of the terminal device, where m is an integer greater than or equal to 2, and n is an integer greater than or equal to 1 and less than or equal to m;
Determining sharpness information of the local images, corresponding to the n screen sub-regions, included in each of q frames of images, where each of the q frames can be displayed on the display screen, the q frames are images of the same size containing the same picture content but taken at different shooting focal lengths, and q is an integer greater than or equal to 2;
Determining, according to the sharpness information of the local images included in each of the q frames, the frame whose local image is the clearest among the q frames as a target image;
Outputting and displaying the target image.
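Purely as an illustration of the flow described above, the following Python sketch compresses the five steps into a toy example; the helper names, the data layout and the numbers are assumptions made for readability and are not part of the claimed method.

```python
# Toy end-to-end sketch of the claimed flow (hypothetical names and data).
# Screen: m = 4 sub-regions laid out 2 x 2; q = 3 candidate frames.

SUB_REGION_CENTERS = {0: (90, 54), 1: (270, 54), 2: (90, 162), 3: (270, 162)}

def nearest_sub_regions(touch_xy, n=1):
    """Step 120: the n sub-regions whose centers lie closest to the input event."""
    dist = lambda c: (c[0] - touch_xy[0]) ** 2 + (c[1] - touch_xy[1]) ** 2
    return sorted(SUB_REGION_CENTERS, key=lambda r: dist(SUB_REGION_CENTERS[r]))[:n]

def local_sharpness(frame_sharpness, regions):
    """Step 130: sharpness info of the local image covering the chosen regions."""
    return sum(frame_sharpness[r] for r in regions) / len(regions)

# Per-frame, per-sub-region sharpness values (made-up numbers).
frames = [{0: 2.0, 1: 8.0, 2: 3.0, 3: 1.0},
          {0: 7.0, 1: 2.0, 2: 6.0, 3: 2.0},
          {0: 3.0, 1: 3.0, 2: 9.0, 3: 8.0}]

touch = (100, 150)                                       # step 110: input event
regions = nearest_sub_regions(touch, n=1)                # step 120
scores = [local_sharpness(f, regions) for f in frames]   # step 130
target = scores.index(max(scores))                       # step 140
print("display frame", target)                           # step 150 -> frame 2
```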
In a first possible implementation of the first aspect, the determining, according to the input event, n screen sub-regions from among the m screen sub-regions included in the display screen of the terminal device includes:
Determining, according to the input event, a selection area and the center point of the selection area on the display screen of the terminal device;
Determining the n screen sub-regions from among the m screen sub-regions included in the display screen according to the center point of the selection area, where the n screen sub-regions are the screen sub-regions, among the m screen sub-regions, closest to the center point of the selection area.
According to the first possible implementation of the first aspect, in a second possible implementation, after the determining, according to the input event, the selection area and the center point of the selection area on the display screen of the terminal device, the method further includes:
Determining the coordinate information of the center point of the selection area relative to a determined vertex A on the display screen.
According to the second possible implementation of the first aspect, in a third possible implementation, the method further includes:
Determining the coordinate information of the center point of each of the m screen sub-regions included in the display screen relative to the vertex A;
The determining the n screen sub-regions from among the m screen sub-regions included in the display screen according to the center point of the selection area, where the n screen sub-regions are the screen sub-regions, among the m screen sub-regions, closest to the center point of the selection area, includes:
Determining the n screen sub-regions from among the m screen sub-regions included in the display screen according to the coordinate information of the center point of the selection area relative to the vertex A and the coordinate information of the center point of each of the m screen sub-regions relative to the vertex A on the display screen, where the n screen sub-regions are the screen sub-regions, among the m screen sub-regions, closest to the center point of the selection area.
According to the third possible implementation of the first aspect, in a fourth possible implementation, after the determining the n screen sub-regions from among the m screen sub-regions included in the display screen according to the coordinate information of the center point of the selection area relative to the vertex A and the coordinate information of the center point of each of the m screen sub-regions relative to the vertex A on the display screen, the method further includes:
Determining the weight value of each of the n screen sub-regions according to the distance between the center point of each of the n screen sub-regions and the center point of the selection area.
According to the fourth possible implementation of the first aspect, in a fifth possible implementation, the method further includes:
Determining, according to the luminance values corresponding to each of the n screen sub-regions included in each of the q frames, the sharpness value corresponding to each of the n screen sub-regions for each of the q frames;
The determining the sharpness information of the local images, corresponding to the n screen sub-regions, included in each of the q frames includes:
Determining the sharpness information of the local images, corresponding to the n screen sub-regions, included in each of the q frames according to the sharpness value corresponding to each of the n screen sub-regions included in each of the q frames and the weight values of the n screen sub-regions.
According to the fifth possible implementation of the first aspect, in a sixth possible implementation, the method further includes:
Determining, according to the number of screen sub-regions included in the display screen, the number of image sub-regions included in each of the q frames, where the number of image sub-regions included in each of the q frames is the same as the number of screen sub-regions included in the display screen, namely m, and the image sub-regions included in each of the q frames correspond one-to-one to the screen sub-regions included in the display screen;
The determining, according to the luminance values corresponding to each of the n screen sub-regions included in each of the q frames, the sharpness value corresponding to each of the n screen sub-regions for each of the q frames includes:
Determining the sharpness value of the image sub-region, corresponding to each of the n screen sub-regions, included in each of the q frames according to the luminance value of the image sub-region corresponding to each of the n screen sub-regions included in each of the q frames;
The determining the sharpness information of the local images, corresponding to the n screen sub-regions, included in each of the q frames according to the sharpness value corresponding to each of the n screen sub-regions included in each of the q frames and the weight values of the n screen sub-regions includes:
Determining the sharpness information of the local images, corresponding to the n screen sub-regions, included in each of the q frames according to the sharpness values of the image sub-regions corresponding to the n screen sub-regions included in each of the q frames and the weight values of the n screen sub-regions.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, including:
A receiving module, configured to receive an input event;
A screen sub-region determining module, configured to determine, according to the input event, n screen sub-regions from among m screen sub-regions included in a display screen of the terminal device, where m is an integer greater than or equal to 2, and n is an integer greater than or equal to 1 and less than or equal to m;
A sharpness information determining module, configured to determine sharpness information of the local images, corresponding to the n screen sub-regions, included in each of q frames of images, where each of the q frames can be displayed on the display screen, the q frames are images of the same size containing the same picture content but taken at different shooting focal lengths, and q is an integer greater than or equal to 2;
A target image determining module, configured to determine, according to the sharpness information of the local images included in each of the q frames, the frame whose local image is the clearest among the q frames as a target image;
A display module, configured to output and display the target image.
In a first possible implementation of the second aspect, the screen sub-region determining module is specifically configured to determine, according to the input event, a selection area and the center point of the selection area on the display screen of the terminal device, and to determine the n screen sub-regions from among the m screen sub-regions included in the display screen according to the center point of the selection area, where the n screen sub-regions are the screen sub-regions, among the m screen sub-regions, closest to the center point of the selection area.
According to the first possible implementation of the second aspect, in a second possible implementation, the screen sub-region determining module is further configured to determine, after the selection area and the center point of the selection area are determined on the display screen of the terminal device according to the input event, the coordinate information of the center point of the selection area relative to a determined vertex A on the display screen.
According to the second possible implementation of the second aspect, in a third possible implementation, the screen sub-region determining module is further configured to determine the coordinate information of the center point of each of the m screen sub-regions included in the display screen relative to the vertex A;
The screen sub-region determining module is specifically configured to determine the n screen sub-regions from among the m screen sub-regions included in the display screen according to the coordinate information of the center point of the selection area relative to the vertex A and the coordinate information of the center point of each of the m screen sub-regions relative to the vertex A on the display screen, where the n screen sub-regions are the screen sub-regions, among the m screen sub-regions, closest to the center point of the selection area.
According to the third possible implementation of the second aspect, in a fourth possible implementation, the apparatus further includes:
A weight value determining module, configured to determine, after the n screen sub-regions are determined from among the m screen sub-regions included in the display screen according to the coordinate information of the center point of the selection area relative to the vertex A and the coordinate information of the center point of each of the m screen sub-regions relative to the vertex A on the display screen, the weight value of each of the n screen sub-regions according to the distance between the center point of each of the n screen sub-regions and the center point of the selection area.
According to the fourth possible implementation of the second aspect, in a fifth possible implementation, the apparatus further includes:
A sharpness value determining module, configured to determine, according to the luminance values corresponding to each of the n screen sub-regions included in each of the q frames, the sharpness value corresponding to each of the n screen sub-regions for each of the q frames;
The sharpness information determining module is specifically configured to determine the sharpness information of the local images, corresponding to the n screen sub-regions, included in each of the q frames according to the sharpness value corresponding to each of the n screen sub-regions included in each of the q frames and the weight values of the n screen sub-regions.
According to the fifth possible implementation of the second aspect, in a sixth possible implementation, the apparatus further includes:
An image sub-region number determining module, configured to determine, according to the number of screen sub-regions included in the display screen, the number of image sub-regions included in each of the q frames, where the number of image sub-regions included in each of the q frames is the same as the number of screen sub-regions included in the display screen, namely m, and the image sub-regions included in each of the q frames correspond one-to-one to the screen sub-regions included in the display screen;
The sharpness value determining module is specifically configured to determine the sharpness value of the image sub-region, corresponding to each of the n screen sub-regions, included in each of the q frames according to the luminance value of the image sub-region corresponding to each of the n screen sub-regions included in each of the q frames;
The sharpness information determining module is specifically configured to determine the sharpness information of the local images, corresponding to the n screen sub-regions, included in each of the q frames according to the sharpness values of the image sub-regions corresponding to the n screen sub-regions included in each of the q frames and the weight values of the n screen sub-regions.
In a third aspect, an embodiment of the present invention provides a terminal device, which includes at least one image processing apparatus as described above.
With the image processing method, apparatus and terminal device of the embodiments of the present invention, n screen sub-regions are determined from among the m screen sub-regions included in the display screen according to the received input event, the sharpness information of the local images corresponding to the n screen sub-regions included in each of the q frames is determined, the frame whose local image is the clearest among the q frames is determined as the target image according to that sharpness information, and the target image is output and displayed. This solves the prior-art problems of high algorithm complexity, high demands on the system hardware environment such as the central processing unit, and long computation time in achieving the image refocusing display effect; the refocusing display effect is achieved by performing simple algorithmic processing on multiple frames of images, saving computation time.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description show merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative efforts.
Fig. 1 is a flowchart of an image processing method according to Embodiment 1 of the present invention;
Fig. 2A is a flowchart of an image processing method according to Embodiment 2 of the present invention;
Fig. 2B is a schematic diagram of the screen sub-regions included in a display screen according to Embodiment 2 of the present invention;
Fig. 3 is a schematic structural diagram of an image processing apparatus according to Embodiment 3 of the present invention;
Fig. 4 is a schematic structural diagram of a terminal device according to Embodiment 5 of the present invention.
Description of embodiments
To make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative efforts shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart of an image processing method according to Embodiment 1 of the present invention. The method of this embodiment is applicable to the case where images shot in a burst by a single camera are processed to achieve the image refocusing display effect while reducing algorithm complexity and computation time. The method is performed by an image processing apparatus, which is typically implemented in hardware and/or software. The method of this embodiment includes the following steps:
110. Receive an input event.
120. Determine, according to the input event, n screen sub-regions from among m screen sub-regions included in a display screen of the terminal device, where m is an integer greater than or equal to 2, and n is an integer greater than or equal to 1 and less than or equal to m.
130. Determine sharpness information of the local images, corresponding to the n screen sub-regions, included in each of q frames of images, where each of the q frames can be displayed on the display screen, the q frames are images of the same size containing the same picture content but taken at different shooting focal lengths, and q is an integer greater than or equal to 2.
Specifically, the q frames can be obtained by configuring different current values for the camera motor (the current value is changed from a smaller value to a larger value) and controlling the lens movement through the variation of the current on the motor, so that the focus position of the lens moves step by step from infinity to the nearest focus position. After each lens movement, the camera system obtains one frame of RAW data from the sensor; the RAW data is processed by the ISP module (video noise reduction, color restoration, and so on) to obtain YUV data. Further, according to the width and height of the YUV data (in pixels), the data is divided horizontally and vertically into multiple sub-regions, and the region at the crossing position between every two rows or two columns is also a sub-region. The sub-regions included in each of the q frames may be YUV data regions of various resolution sizes that were divided before the photographing application runs, so no division is needed at run time.
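As a rough sketch of the division just described, the snippet below slices the Y (luma) plane of one frame into an even grid of blocks plus one block centered on every crossing point of the dividing lines; the NumPy layout and the cell counts are assumptions, since the patent does not fix a concrete data format.

```python
import numpy as np

# Hypothetical sketch: slice a frame's Y plane into cols x rows grid blocks,
# plus one extra block centered on every crossing point of the dividing lines.
def luma_blocks(y_plane, cols, rows):
    h, w = y_plane.shape
    bw, bh = w // cols, h // rows
    blocks = []
    for r in range(rows):                       # the plain grid cells
        for c in range(cols):
            blocks.append(y_plane[r*bh:(r+1)*bh, c*bw:(c+1)*bw])
    for r in range(1, rows):                    # cells centered on crossings
        for c in range(1, cols):
            cy, cx = r * bh, c * bw
            blocks.append(y_plane[cy - bh//2:cy + bh//2, cx - bw//2:cx + bw//2])
    return blocks

# Toy frame using the 3264 x 2448 size from Embodiment 2, divided 8 x 6.
y = np.zeros((2448, 3264), dtype=np.uint8)
print(len(luma_blocks(y, cols=8, rows=6)))      # 48 + 35 = 83 blocks
```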
140. Determine, according to the sharpness information of the local images included in each of the q frames, the frame whose local image is the clearest among the q frames as a target image.
150. Output and display the target image.
It should be noted that, when the image is stored in a file format such as JPG, the method further includes, before outputting and displaying the target image, decompressing the file containing the target image to obtain the target image and then outputting and displaying it.
It is worth noting that, when an image is stored in the JPG file format, the file also contains EXIF information, and the EXIF information contains the sharpness information of the image sub-regions of the image. The EXIF information refers to the extended information in the JPG file, which is used to record parameters such as the resolution and exposure time of the image.
Specifically, n screen sub-regions are determined from among the m screen sub-regions included in the display screen according to the received input event, the sharpness information of the local images corresponding to the n screen sub-regions included in each of the q frames is determined, the frame whose local image is the clearest among the q frames is determined as the target image according to that sharpness information, and the target image is output and displayed.
With the image processing method provided in this embodiment, n screen sub-regions are determined from among the m screen sub-regions included in the display screen according to the received input event, the sharpness information of the local images corresponding to the n screen sub-regions included in each of the q frames is determined, the frame whose local image is the clearest among the q frames is determined as the target image according to that sharpness information, and the target image is output and displayed. This solves the prior-art problems of high algorithm complexity, high demands on the system hardware environment such as the central processing unit, and long computation time in achieving the image refocusing display effect; the refocusing display effect is achieved by performing simple algorithmic processing on multiple frames of images, saving computation time.
Fig. 2A is a flowchart of an image processing method according to Embodiment 2 of the present invention. Referring to Fig. 2A, the method of this embodiment may include:
201. Receive an input event.
202. Determine, according to the input event, a selection area and the center point of the selection area on the display screen of the terminal device.
203. Determine the coordinate information of the center point of the selection area relative to a determined vertex A on the display screen.
204. Determine, according to the center point of the selection area, n screen sub-regions from among the m screen sub-regions included in the display screen, where the n screen sub-regions are the screen sub-regions, among the m screen sub-regions, closest to the center point of the selection area.
For example, determining the n screen sub-regions from among the m screen sub-regions included in the display screen according to the center point of the selection area can be implemented in the following way:
Determine the n screen sub-regions from among the m screen sub-regions included in the display screen according to the coordinate information of the center point of the selection area relative to the vertex A and the coordinate information of the center point of each of the m screen sub-regions relative to the vertex A on the display screen, where the n screen sub-regions are the screen sub-regions, among the m screen sub-regions, closest to the center point of the selection area. It should be noted that, before the n screen sub-regions are determined in this way, the coordinate information of the center point of each of the m screen sub-regions included in the display screen relative to the vertex A on the display screen needs to be determined.
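A minimal sketch of this nearest-sub-region selection is given below, assuming the center coordinates of the m screen sub-regions (relative to vertex A) are already known; the data structures are assumptions chosen for readability.

```python
import math

# Pick the n screen sub-regions whose centers lie closest to the center point
# of the selection area; all coordinates are relative to vertex A.
def nearest_sub_regions(selection_center, region_centers, n=1):
    """region_centers: dict {sub_region_index: (x, y)}."""
    def distance(idx):
        x, y = region_centers[idx]
        return math.hypot(x - selection_center[0], y - selection_center[1])
    return sorted(region_centers, key=distance)[:n]

# Tiny example: three sub-region centers, selection centered at (100, 100).
centers = {0: (90, 54), 1: (90, 162), 2: (270, 54)}
print(nearest_sub_regions((100, 100), centers, n=2))   # [0, 1]
```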
For ease of understanding this embodiment, the m screen sub-regions included in the display screen are introduced here with reference to Fig. 2B. The m screen sub-regions included in the display screen may be obtained by dividing the display screen only in the width direction, only in the height direction, or in both the width direction and the height direction. Here, only the division in both the width direction and the height direction is introduced with reference to Fig. 2B, which is a schematic diagram of the screen sub-regions included in a display screen according to Embodiment 2 of the present invention. The vertices of the display screen in Fig. 2B are A, B, C and D respectively, and one screen sub-region is, for example, the cell formed by vertices B, E, F and G. After the center point of the selection area is determined, the coordinate information of that center point relative to vertex A and the coordinate information of the center point of each of the m screen sub-regions relative to vertex A on the display screen can be determined; the n screen sub-regions can then be determined from among the m screen sub-regions included in the display screen according to these two sets of coordinate information.
The size of each screen sub-region is as shown in Fig. 2B. Suppose the display screen is 1440 pixels wide and 648 pixels high. 1440 is exactly divisible by 8, so the display screen is divided 7 times in the width direction into 8 equally spaced parts, shown in Fig. 2B by 7 solid lines, and each screen sub-region is 180 pixels wide; 648 is exactly divisible by 6, so the display screen is divided 5 times in the height direction into 6 equally spaced parts, shown in Fig. 2B by 5 solid lines, and each screen sub-region is 108 pixels high. When the screen sub-regions are divided in both the width direction and the height direction at the same time, each dividing line in the width direction has an intersection point with each dividing line in the height direction; the region centered on such an intersection point, 180 pixels wide and 108 pixels high, is also a screen sub-region, for example the screen sub-region formed by vertices H, I, J and K. The number m of all the divided screen sub-regions is therefore 8 × 6 + 7 × 5 = 83.
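The sub-region count and center coordinates of this example can be reproduced with the short sketch below; the function name and return layout are assumptions for illustration only.

```python
# Hypothetical helper: center coordinates (relative to vertex A, taken as the
# origin) of all screen sub-regions for a screen divided as described above.
def sub_region_centers(width, height, cols, rows):
    cw, ch = width // cols, height // rows
    centers = [(c * cw + cw // 2, r * ch + ch // 2)      # grid cells
               for r in range(rows) for c in range(cols)]
    centers += [(c * cw, r * ch)                          # crossing-centered cells
                for r in range(1, rows) for c in range(1, cols)]
    return centers

centers = sub_region_centers(1440, 648, 8, 6)
print(len(centers))        # m = 48 + 35 = 83
print(centers[0])          # first grid-cell center: (90, 54)
```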
205. Determine the weight value of each of the n screen sub-regions according to the distance between the center point of each of the n screen sub-regions and the center point of the selection area.
The weight value of each of the n screen sub-regions is determined according to the distance between the center point coordinates of each of the n screen sub-regions and the center point of the selection area: the closer the center point coordinates of a screen sub-region are to the center point of the selection area, the higher the weight value assigned to that screen sub-region.
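The patent only states that a closer sub-region receives a higher weight; the inverse-distance normalization sketched below is one possible choice and is an assumption, not the claimed formula.

```python
import math

# One possible weighting (an assumption): inverse distance to the selection
# center, normalized so the n weights sum to 1; closer centers weigh more.
def sub_region_weights(selection_center, chosen_centers, eps=1.0):
    """chosen_centers: dict {sub_region_index: (x, y)} of the n chosen regions."""
    inv = {idx: 1.0 / (math.hypot(x - selection_center[0],
                                  y - selection_center[1]) + eps)
           for idx, (x, y) in chosen_centers.items()}
    total = sum(inv.values())
    return {idx: v / total for idx, v in inv.items()}

print(sub_region_weights((100, 100), {0: (90, 54), 1: (90, 162)}))
# roughly {0: 0.57, 1: 0.43} -- the closer sub-region 0 gets the larger weight
```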
206. Determine, according to the luminance values corresponding to each of the n screen sub-regions included in each of the q frames, the sharpness value corresponding to each of the n screen sub-regions for each of the q frames.
For example, determining the sharpness value corresponding to each of the n screen sub-regions for each of the q frames according to the luminance values corresponding to the n screen sub-regions included in each of the q frames can be implemented in the following way:
Determine the sharpness value of the image sub-region, corresponding to each of the n screen sub-regions, included in each of the q frames according to the luminance value of the image sub-region corresponding to each of the n screen sub-regions included in each of the q frames.
It should be noted that the luminance value of an image sub-region can be the Y-component data of the YUV video data of each frame, and the sharpness value of the image sub-region corresponding to each of the n screen sub-regions included in each of the q frames is determined according to the Y-component data of the YUV video data of each frame.
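The patent does not specify how the Y-component data is turned into a sharpness value; a gradient-energy focus measure over the luma block, as sketched below, is one common choice and is an assumption rather than the claimed formula.

```python
import numpy as np

# One common focus measure (an assumption): the mean squared luminance gradient
# of the luma block -- a better-focused block has stronger local luminance
# changes and therefore yields a larger value.
def sharpness_value(y_block):
    y = y_block.astype(np.float64)
    gx = np.diff(y, axis=1)          # horizontal luminance differences
    gy = np.diff(y, axis=0)          # vertical luminance differences
    return float((gx ** 2).mean() + (gy ** 2).mean())

flat = np.full((108, 180), 128, dtype=np.uint8)                 # no detail
edges = np.tile(np.array([0, 255], dtype=np.uint8), (108, 90))  # strong edges
print(sharpness_value(flat), "<", sharpness_value(edges))
```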
207. Determine the sharpness information of the local images, corresponding to the n screen sub-regions, included in each of the q frames according to the sharpness value corresponding to each of the n screen sub-regions included in each of the q frames and the weight values of the n screen sub-regions.
For example, this can be implemented in the following way:
Determine the sharpness information of the local images, corresponding to the n screen sub-regions, included in each of the q frames according to the sharpness values of the image sub-regions corresponding to the n screen sub-regions included in each of the q frames and the weight values of the n screen sub-regions. For example, if n equals 2, the sharpness values of the 2 image sub-regions corresponding to the 2 screen sub-regions included in one of the q frames are 5 and 6 respectively, the weight value of the screen sub-region whose sharpness value is 5 is 0.6, and the weight value of the screen sub-region whose sharpness value is 6 is 0.4, then the sharpness information of the local image corresponding to these 2 screen sub-regions included in this frame is 5 × 0.6 + 6 × 0.4 = 5.4.
The number of image sub-regions can be determined according to the number of screen sub-regions included in the display screen. Specifically, the number of image sub-regions included in each of the q frames is determined according to the number of screen sub-regions included in the display screen; the number of image sub-regions included in each of the q frames is the same as the number of screen sub-regions included in the display screen, namely m, and the image sub-regions included in each of the q frames correspond one-to-one to the screen sub-regions included in the display screen. Since the number of image sub-regions equals the number of screen sub-regions, each frame can be divided into m image sub-regions by the division method introduced in step 204. The method of dividing each frame into m image sub-regions is the same as the method of dividing the screen sub-regions; although the width and height of a screen sub-region in step 204 and of an image sub-region in step 207 are not the same, the total number of image sub-regions must be kept equal to the total number of screen sub-regions. Moreover, when dividing in both the width direction and the height direction at the same time, the ratio of the width of the display screen to the width of a single screen sub-region should as far as possible equal the ratio of the width of each frame to the width of a single image sub-region, and the ratio of the height of the display screen to the height of a single screen sub-region should equal the ratio of the height of each frame to the height of a single image sub-region. For example, for the display screen illustrated in step 204, the width is 1440 pixels, which is exactly divisible by 8, so the width of a single cell is 180 pixels, and the height is 648 pixels, which is exactly divisible by 6, so the height of a single cell is 108 pixels. If each frame in step 206 is 3264 pixels wide and 2448 pixels high, then each image sub-region obtained after dividing 7 times in the width direction is 408 pixels wide, and each image sub-region obtained after dividing 5 times in the height direction is 408 pixels high; likewise, each dividing line in the width direction has an intersection point with each dividing line in the height direction, and the region centered on such an intersection point, 408 pixels wide and 408 pixels high, is also an image sub-region. The number m of all the divided image sub-regions is likewise 83.
208. Determine, according to the sharpness information of the local images included in each of the q frames, the frame whose local image is the clearest among the q frames as a target image.
209. Output and display the target image.
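Tying steps 207 and 208 together, the sketch below reuses the 0.6 / 0.4 weights from the worked example in step 207 and picks, among q = 3 frames, the one whose weighted local sharpness is highest; the per-frame sharpness values are made-up illustration data.

```python
# Steps 207 + 208 on toy data: the frame from the worked example above scores
# 5 * 0.6 + 6 * 0.4 = 5.4, but frame 1 below scores higher and is chosen.
weights = {7: 0.6, 8: 0.4}                 # weight values of the n = 2 sub-regions
frames = [                                  # sharpness values per frame (q = 3)
    {7: 5.0, 8: 6.0},                       # local sharpness: 5.4
    {7: 8.0, 8: 7.0},                       # local sharpness: 7.6
    {7: 4.0, 8: 9.0},                       # local sharpness: 6.0
]
local = [sum(f[r] * w for r, w in weights.items()) for f in frames]
target_index = max(range(len(local)), key=local.__getitem__)
print(local, "->", "target frame", target_index)   # [5.4, 7.6, 6.0] -> target frame 1
```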
With the image processing method provided in this embodiment, a selection area and its center point are determined on the display screen of the terminal device according to the received input event; n screen sub-regions are determined from among the m screen sub-regions included in the display screen according to the center point of the selection area; the weight value of each of the n screen sub-regions is determined according to the distance between its center point and the center point of the selection area; the sharpness value corresponding to each of the n screen sub-regions for each of the q frames is determined according to the luminance values corresponding to the n screen sub-regions included in each of the q frames; the sharpness information of the local images corresponding to the n screen sub-regions included in each of the q frames is determined according to those sharpness values and the weight values of the n screen sub-regions; the frame whose local image is the clearest among the q frames is determined as the target image according to that sharpness information; and the target image is output and displayed. This solves the prior-art problems of high algorithm complexity, high demands on the system hardware environment such as the central processing unit, and long computation time in achieving the image refocusing display effect; the refocusing display effect is achieved by performing simple algorithmic processing on multiple frames of images, saving computation time.
Fig. 3 is a schematic structural diagram of an image processing apparatus 300 according to Embodiment 3 of the present invention. The apparatus of this embodiment is applicable to the case where images shot in a burst by a single camera are processed to achieve the image refocusing display effect while reducing algorithm complexity and computation time. The apparatus is typically implemented in hardware and/or software. Referring to Fig. 3, the apparatus includes the following modules: a receiving module 310, a screen sub-region determining module 320, a sharpness information determining module 330, a target image determining module 340 and a display module 350.
The receiving module 310 is configured to receive an input event. The screen sub-region determining module 320 is configured to determine, according to the input event, n screen sub-regions from among the m screen sub-regions included in the display screen of the terminal device, where m is an integer greater than or equal to 2, and n is an integer greater than or equal to 1 and less than or equal to m. The sharpness information determining module 330 is configured to determine sharpness information of the local images, corresponding to the n screen sub-regions, included in each of q frames of images, where each of the q frames can be displayed on the display screen, the q frames are images of the same size containing the same picture content but taken at different shooting focal lengths, and q is an integer greater than or equal to 2. The target image determining module 340 is configured to determine, according to the sharpness information of the local images included in each of the q frames, the frame whose local image is the clearest among the q frames as a target image. The display module 350 is configured to output and display the target image.
Further, the screen sub-region determining module 320 is specifically configured to determine, according to the input event, a selection area and the center point of the selection area on the display screen of the terminal device, and to determine the n screen sub-regions from among the m screen sub-regions included in the display screen according to the center point of the selection area, where the n screen sub-regions are the screen sub-regions, among the m screen sub-regions, closest to the center point of the selection area.
Further, the screen sub-region determining module 320 is also configured to determine, after the selection area and the center point of the selection area are determined on the display screen of the terminal device according to the input event, the coordinate information of the center point of the selection area relative to a determined vertex A on the display screen.
Further, the screen sub-region determining module 320 is also configured to determine the coordinate information of the center point of each of the m screen sub-regions included in the display screen relative to the vertex A;
Further, the screen sub-region determining module 320 is specifically configured to determine the n screen sub-regions from among the m screen sub-regions included in the display screen according to the coordinate information of the center point of the selection area relative to the vertex A and the coordinate information of the center point of each of the m screen sub-regions relative to the vertex A on the display screen, where the n screen sub-regions are the screen sub-regions, among the m screen sub-regions, closest to the center point of the selection area.
Further, the apparatus also includes:
A weight value determining module, configured to determine, after the n screen sub-regions are determined from among the m screen sub-regions included in the display screen according to the coordinate information of the center point of the selection area relative to the vertex A and the coordinate information of the center point of each of the m screen sub-regions relative to the vertex A on the display screen, the weight value of each of the n screen sub-regions according to the distance between the center point of each of the n screen sub-regions and the center point of the selection area.
Further, the apparatus also includes:
A sharpness value determining module, configured to determine, according to the luminance values corresponding to each of the n screen sub-regions included in each of the q frames, the sharpness value corresponding to each of the n screen sub-regions for each of the q frames;
The sharpness information determining module 330 is specifically configured to determine the sharpness information of the local images, corresponding to the n screen sub-regions, included in each of the q frames according to the sharpness value corresponding to each of the n screen sub-regions included in each of the q frames and the weight values of the n screen sub-regions.
Further, the apparatus also includes:
An image sub-region number determining module, configured to determine, according to the number of screen sub-regions included in the display screen, the number of image sub-regions included in each of the q frames, where the number of image sub-regions included in each of the q frames is the same as the number of screen sub-regions included in the display screen, namely m, and the image sub-regions included in each of the q frames correspond one-to-one to the screen sub-regions included in the display screen;
The sharpness value determining module is specifically configured to determine the sharpness value of the image sub-region, corresponding to each of the n screen sub-regions, included in each of the q frames according to the luminance value of the image sub-region corresponding to each of the n screen sub-regions included in each of the q frames;
The sharpness information determining module 330 is specifically configured to determine the sharpness information of the local images, corresponding to the n screen sub-regions, included in each of the q frames according to the sharpness values of the image sub-regions corresponding to the n screen sub-regions included in each of the q frames and the weight values of the n screen sub-regions.
With the image processing apparatus provided in this embodiment, n screen sub-regions are determined from among the m screen sub-regions included in the display screen according to the received input event, the sharpness information of the local images corresponding to the n screen sub-regions included in each of the q frames is determined, the frame whose local image is the clearest among the q frames is determined as the target image according to that sharpness information, and the target image is output and displayed. This solves the prior-art problems of high algorithm complexity, high demands on the system hardware environment such as the central processing unit, and long computation time in achieving the image refocusing display effect; the refocusing display effect is achieved by performing simple algorithmic processing on multiple frames of images, saving computation time.
It should be noted that an embodiment of the present invention also provides a terminal device, and this terminal device may include the image processing apparatus mentioned in the above embodiments. The image processing apparatus may be used to execute the methods and technical solutions shown in any of the embodiments of Fig. 1 and Fig. 2; its implementation principles and technical effects are similar and are not repeated here.
Accordingly, referring to Fig. 4, which is a schematic structural diagram of a terminal device according to Embodiment 5 of the present invention, the terminal device includes at least one processor 501, for example a CPU, at least one network interface 504, for example a physical network card, or another user interface 503, a memory 505, and at least one communication bus 502.
The communication bus 502 is used to implement connection and communication between these components.
The network interface 504 is used to implement connection and communication between this physical host and the network; for example, the network interface 504 may be used to connect devices such as management network cards and/or physical switches.
Optionally, the user interface 503 may include a display, a keyboard or another pointing device, for example a mouse, a trackball, a touch pad or a touch-sensitive display screen.
The memory 505 may include a high-speed random access memory (RAM), and may also include a non-volatile memory, for example at least one disk memory. Optionally, the memory 505 may also include at least one storage device located away from the aforementioned processor 501.
In some embodiments, the memory 505 stores the following elements, executable modules or data structures, or a subset or superset thereof:
An operating system 5051, which includes various system programs and is used to implement various basic services and process hardware-based tasks;
An application module 5052, which includes various application programs and is used to implement various application services.
The application module 5052 includes, but is not limited to, various units related to data exchange, such as a receiving unit, a dispensing unit, an obtaining unit and a synthesizing unit.
Specifically, the processor 501 is configured to: receive an input event; determine, according to the input event, n screen sub-regions from among m screen sub-regions included in the display screen of the terminal device, where m is an integer greater than or equal to 2, and n is an integer greater than or equal to 1 and less than or equal to m; determine sharpness information of the local images, corresponding to the n screen sub-regions, included in each of q frames of images, where each of the q frames can be displayed on the display screen, the q frames are images of the same size containing the same picture content but taken at different shooting focal lengths, and q is an integer greater than or equal to 2; determine, according to the sharpness information of the local images included in each of the q frames, the frame whose local image is the clearest among the q frames as a target image; and output and display the target image.
Specifically, the processor 501 is specifically configured to determine, according to the input event, a selection area and the center point of the selection area on the display screen of the terminal device, and to determine the n screen sub-regions from among the m screen sub-regions included in the display screen according to the center point of the selection area, where the n screen sub-regions are the screen sub-regions, among the m screen sub-regions, closest to the center point of the selection area.
Further, described processor 501 also for according to described incoming event after the display screen of terminal equipment is determined the central point of selection area and described selection area, determine that the central point of described selection area is with respect to the coordinate information of one on described display screen definite summit A.
Optionally, described processor 501 also for the central point of each screen subregion of determining described m the screen subregion that described display screen comprises with respect to the coordinate information of described summit A; And described processor 501 specifically for the central point of each the screen subregion in m the screen subregion comprising with respect to coordinate information and the described display screen of described summit A according to the central point of described selection area the coordinate information with respect to the summit A on described display screen, described m the screen subregion comprising from described display screen, determine described n screen subregion, described n screen subregion refers to the nearest screen subregion of central point of selection area described in described m screen subregion middle distance.
It should be noted that, the central point of each the screen subregion in m the screen subregion comprising with respect to coordinate information and the described display screen of described summit A according to the central point of described selection area is with respect to the coordinate information of the summit A on described display screen, after determining described n screen subregion described m the screen subregion comprising from described display screen, described processor 501 is also for according to the distance of the central point of the central point of described n each screen subregion of screen subregion and described selection area, determine the weighted value of each screen subregion in described n screen subregion.
It should be noted that, described processor 501 is the brightness value corresponding to each the screen subregion in described n screen subregion for comprising according to the every two field picture of described q two field picture also, determines the definition values corresponding to each the screen subregion in described n screen subregion that in described q two field picture, every two field picture comprises.And described processor 501 specifically for according to every two field picture in described q two field picture, comprise corresponding to the definition values of each the screen subregion in described n screen subregion and the weighted value of described n screen subregion, determine the sharpness information of the topography corresponding to described n screen subregion that in described q two field picture, every two field picture comprises.
Further, the processor 501 is further configured to determine, according to the number of screen sub-regions comprised in the display screen, the number of image regions comprised in each of the q frames of images; the number of image regions comprised in each frame is the same as the number of screen sub-regions comprised in the display screen, namely m, and the image regions comprised in each frame correspond one-to-one to the screen sub-regions comprised in the display screen. The processor 501 is specifically configured to determine, according to brightness values of the image regions corresponding to each of the n screen sub-regions in each of the q frames of images, sharpness values of those image regions, and to determine the sharpness information of the local image corresponding to the n screen sub-regions in each frame according to those sharpness values and the weight values of the n screen sub-regions.
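The embodiment derives sharpness values from the brightness values of the image regions but does not fix a particular focus measure; the sketch below uses luminance variance as one plausible brightness-based measure and then combines the per-region values with the sub-region weights to score each frame. The data layout (each frame given as a list of per-region brightness samples) and all function names are assumptions.

```python
def region_sharpness(brightness):
    """Sharpness value of one image region from its brightness samples.
    Variance of the luminance is used as a simple focus measure; the
    embodiment only states that sharpness is derived from brightness,
    so this specific measure is an assumption."""
    mean = sum(brightness) / len(brightness)
    return sum((b - mean) ** 2 for b in brightness) / len(brightness)

def frame_sharpness_info(frame_regions, selected, weights):
    """Weighted sharpness information of the local image of one frame.
    frame_regions[i] holds the brightness samples of image region i,
    which corresponds one-to-one to screen sub-region i."""
    return sum(w * region_sharpness(frame_regions[i])
               for i, w in zip(selected, weights))

def pick_target_frame(frames, selected, weights):
    """Index of the frame whose local image is the sharpest."""
    scores = [frame_sharpness_info(f, selected, weights) for f in frames]
    return max(range(len(frames)), key=scores.__getitem__)
```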
With the terminal device of this embodiment of the present invention, n screen sub-regions are determined from among the m screen sub-regions comprised in the display screen according to the received input event, sharpness information of the local image corresponding to the n screen sub-regions is determined for each of the q frames of images, the frame whose local image is the sharpest is determined from the q frames of images as the target image according to that sharpness information, and the target image is output and displayed. This resolves the prior-art problems that achieving a refocused display effect of an image requires a highly complex algorithm, places high demands on the system hardware environment such as the central processing unit, and takes a long computation time; the refocused display effect is achieved through simple algorithmic processing of multiple frames of images, which saves computation time.
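Tying the sketches above together, a hedged end-to-end outline of the flow summarized in this paragraph might look as follows. It reuses the hypothetical helpers defined in the earlier sketches, and the screen size, the 4 × 4 grid, and the default n = 1 are illustrative assumptions only, not part of the disclosure.

```python
def handle_input_event(touch_x, touch_y, frames,
                       screen_w=1080, screen_h=1920, rows=4, cols=4, n=1):
    """Illustrative flow: input event -> n nearest screen sub-regions ->
    distance-based weights -> per-frame sharpness of the local image ->
    return the sharpest frame as the target image."""
    centers = subregion_centers(screen_w, screen_h, rows, cols)
    selected = nearest_subregions(touch_x, touch_y, centers, n)
    weights = subregion_weights(touch_x, touch_y, centers, selected)
    target = pick_target_frame(frames, selected, weights)
    return frames[target]   # the target image to be output and displayed
```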
Persons of ordinary skill in the art will appreciate that all or part of the steps of the foregoing method embodiments may be implemented by hardware related to program instructions. The foregoing program may be stored in a computer-readable storage medium; when the program is executed, the steps of the foregoing method embodiments are performed. The foregoing storage medium includes any medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
Finally, it should be noted that the foregoing embodiments are merely intended to describe the technical solutions of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, persons of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements to some or all of the technical features thereof, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (15)

1. An image processing method, characterized by comprising:
receiving an input event;
determining, according to the input event, n screen sub-regions from among m screen sub-regions comprised in a display screen of a terminal device, wherein m is an integer greater than or equal to 2, and n is an integer greater than or equal to 1 and less than or equal to m;
determining sharpness information of a local image corresponding to the n screen sub-regions comprised in each of q frames of images, wherein each of the q frames of images can be displayed on the display screen, the q frames of images are identical in size and comprise identical picture content but are captured at different shooting focal lengths, and q is an integer greater than or equal to 2;
determining, from the q frames of images according to the sharpness information of the local image comprised in each of the q frames of images, a frame whose local image is the sharpest as a target image; and
outputting and displaying the target image.
2. The image processing method according to claim 1, characterized in that the determining, according to the input event, n screen sub-regions from among the m screen sub-regions comprised in the display screen of the terminal device comprises:
determining, according to the input event, a selection area and a central point of the selection area on the display screen of the terminal device; and
determining, according to the central point of the selection area, the n screen sub-regions from among the m screen sub-regions comprised in the display screen, wherein the n screen sub-regions are the screen sub-regions, among the m screen sub-regions, whose central points are closest to the central point of the selection area.
3. The image processing method according to claim 2, characterized in that, after the determining, according to the input event, a selection area and a central point of the selection area on the display screen of the terminal device, the method further comprises:
determining coordinate information of the central point of the selection area relative to a determined vertex A of the display screen.
4. The image processing method according to claim 3, characterized in that the method further comprises:
determining coordinate information, relative to the vertex A, of the central point of each of the m screen sub-regions comprised in the display screen;
wherein the determining, according to the central point of the selection area, the n screen sub-regions from among the m screen sub-regions comprised in the display screen, the n screen sub-regions being the screen sub-regions closest to the central point of the selection area among the m screen sub-regions, comprises:
determining the n screen sub-regions from among the m screen sub-regions comprised in the display screen according to the coordinate information of the central point of the selection area relative to the vertex A and the coordinate information of the central point of each of the m screen sub-regions relative to the vertex A, wherein the n screen sub-regions are the screen sub-regions closest to the central point of the selection area among the m screen sub-regions.
5. The image processing method according to claim 4, characterized in that, after the determining the n screen sub-regions from among the m screen sub-regions comprised in the display screen according to the coordinate information of the central point of the selection area relative to the vertex A and the coordinate information of the central point of each of the m screen sub-regions relative to the vertex A, the method further comprises:
determining a weight value of each of the n screen sub-regions according to a distance between the central point of each of the n screen sub-regions and the central point of the selection area.
6. The image processing method according to claim 5, characterized in that the method further comprises:
determining, according to brightness values corresponding to each of the n screen sub-regions comprised in each of the q frames of images, a sharpness value corresponding to each of the n screen sub-regions comprised in each of the q frames of images;
wherein the determining sharpness information of the local image corresponding to the n screen sub-regions comprised in each of the q frames of images comprises:
determining the sharpness information of the local image corresponding to the n screen sub-regions comprised in each of the q frames of images according to the sharpness value corresponding to each of the n screen sub-regions comprised in each of the q frames of images and the weight values of the n screen sub-regions.
7. The image processing method according to claim 6, characterized in that the method further comprises:
determining, according to the number of screen sub-regions comprised in the display screen, the number of image regions comprised in each of the q frames of images, wherein the number of image regions comprised in each of the q frames of images is the same as the number of screen sub-regions comprised in the display screen, namely m, and the image regions comprised in each of the q frames of images correspond one-to-one to the screen sub-regions comprised in the display screen;
wherein the determining, according to the brightness values corresponding to each of the n screen sub-regions comprised in each of the q frames of images, the sharpness value corresponding to each of the n screen sub-regions comprised in each of the q frames of images comprises:
determining, according to brightness values of the image regions corresponding to each of the n screen sub-regions comprised in each of the q frames of images, sharpness values of the image regions corresponding to each of the n screen sub-regions comprised in each of the q frames of images; and
the determining the sharpness information of the local image corresponding to the n screen sub-regions comprised in each of the q frames of images according to the sharpness value corresponding to each of the n screen sub-regions and the weight values of the n screen sub-regions comprises:
determining the sharpness information of the local image corresponding to the n screen sub-regions comprised in each of the q frames of images according to the sharpness values of the image regions corresponding to each of the n screen sub-regions comprised in each of the q frames of images and the weight values of the n screen sub-regions.
8. An image processing apparatus, characterized by comprising:
a receiver module, configured to receive an input event;
a screen sub-region determination module, configured to determine, according to the input event, n screen sub-regions from among m screen sub-regions comprised in a display screen of a terminal device, wherein m is an integer greater than or equal to 2, and n is an integer greater than or equal to 1 and less than or equal to m;
a sharpness information determination module, configured to determine sharpness information of a local image corresponding to the n screen sub-regions comprised in each of q frames of images, wherein each of the q frames of images can be displayed on the display screen, the q frames of images are identical in size and comprise identical picture content but are captured at different shooting focal lengths, and q is an integer greater than or equal to 2;
a target image determination module, configured to determine, from the q frames of images according to the sharpness information of the local image comprised in each of the q frames of images, a frame whose local image is the sharpest as a target image; and
a display module, configured to output and display the target image.
9. The image processing apparatus according to claim 8, characterized in that:
the screen sub-region determination module is specifically configured to determine, according to the input event, a selection area and a central point of the selection area on the display screen of the terminal device, and to determine, according to the central point of the selection area, the n screen sub-regions from among the m screen sub-regions comprised in the display screen, wherein the n screen sub-regions are the screen sub-regions, among the m screen sub-regions, whose central points are closest to the central point of the selection area.
10. The image processing apparatus according to claim 9, characterized in that:
the screen sub-region determination module is further configured to determine, after the selection area and the central point of the selection area are determined on the display screen of the terminal device according to the input event, coordinate information of the central point of the selection area relative to a determined vertex A of the display screen.
11. The image processing apparatus according to claim 10, characterized in that:
the screen sub-region determination module is further configured to determine coordinate information, relative to the vertex A, of the central point of each of the m screen sub-regions comprised in the display screen; and
the screen sub-region determination module is specifically configured to determine the n screen sub-regions from among the m screen sub-regions comprised in the display screen according to the coordinate information of the central point of the selection area relative to the vertex A and the coordinate information of the central point of each of the m screen sub-regions relative to the vertex A, wherein the n screen sub-regions are the screen sub-regions closest to the central point of the selection area among the m screen sub-regions.
12. The image processing apparatus according to claim 11, characterized in that the apparatus further comprises:
a weight value determination module, configured to determine, after the n screen sub-regions are determined from among the m screen sub-regions comprised in the display screen according to the coordinate information of the central point of the selection area relative to the vertex A and the coordinate information of the central point of each of the m screen sub-regions relative to the vertex A, a weight value of each of the n screen sub-regions according to a distance between the central point of each of the n screen sub-regions and the central point of the selection area.
13. The image processing apparatus according to claim 12, characterized in that the apparatus further comprises:
a sharpness value determination module, configured to determine, according to brightness values corresponding to each of the n screen sub-regions comprised in each of the q frames of images, a sharpness value corresponding to each of the n screen sub-regions comprised in each of the q frames of images; and
the sharpness information determination module is specifically configured to determine the sharpness information of the local image corresponding to the n screen sub-regions comprised in each of the q frames of images according to the sharpness value corresponding to each of the n screen sub-regions comprised in each of the q frames of images and the weight values of the n screen sub-regions.
14. The image processing apparatus according to claim 13, characterized in that the apparatus further comprises:
an image region number determination module, configured to determine, according to the number of screen sub-regions comprised in the display screen, the number of image regions comprised in each of the q frames of images, wherein the number of image regions comprised in each of the q frames of images is the same as the number of screen sub-regions comprised in the display screen, namely m, and the image regions comprised in each of the q frames of images correspond one-to-one to the screen sub-regions comprised in the display screen;
the sharpness value determination module is specifically configured to determine, according to brightness values of the image regions corresponding to each of the n screen sub-regions comprised in each of the q frames of images, sharpness values of the image regions corresponding to each of the n screen sub-regions comprised in each of the q frames of images; and
the sharpness information determination module is specifically configured to determine the sharpness information of the local image corresponding to the n screen sub-regions comprised in each of the q frames of images according to the sharpness values of the image regions corresponding to each of the n screen sub-regions comprised in each of the q frames of images and the weight values of the n screen sub-regions.
15. A terminal device, characterized in that the terminal device comprises at least one image processing apparatus according to any one of claims 8 to 14.
CN201310754238.9A 2013-12-31 2013-12-31 Image processing method, device and terminal equipment Active CN103702032B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310754238.9A CN103702032B (en) 2013-12-31 2013-12-31 Image processing method, device and terminal equipment

Publications (2)

Publication Number Publication Date
CN103702032A true CN103702032A (en) 2014-04-02
CN103702032B CN103702032B (en) 2017-04-12

Family

ID=50363420

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310754238.9A Active CN103702032B (en) 2013-12-31 2013-12-31 Image processing method, device and terminal equipment

Country Status (1)

Country Link
CN (1) CN103702032B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101150667A (en) * 2006-09-22 2008-03-26 奥林巴斯映像株式会社 Imaging apparatus
WO2009044316A1 (en) * 2007-10-03 2009-04-09 Koninklijke Philips Electronics N.V. System and method for real-time multi-slice acquisition and display of medical ultrasound images
CN101674403A (en) * 2008-09-09 2010-03-17 佳能株式会社 Image pickup apparatus and control method
CN102915188A (en) * 2011-08-01 2013-02-06 中国移动通信集团公司 Method and device for controlling display status of terminal screen

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104104869A (en) * 2014-06-25 2014-10-15 华为技术有限公司 Photographing method and device and electronic equipment
WO2015196802A1 (en) * 2014-06-25 2015-12-30 华为技术有限公司 Photographing method and apparatus, and electronic device
CN104506767A (en) * 2014-11-27 2015-04-08 惠州Tcl移动通信有限公司 Method for generating different focal lengths of same scene by using continuous movement of motor and terminal
CN104506767B (en) * 2014-11-27 2019-08-02 惠州Tcl移动通信有限公司 The method and terminal of same scenery different focal length are generated using motor continuous moving
CN105721768A (en) * 2014-12-19 2016-06-29 汤姆逊许可公司 Method and apparatus for generating adapted slice image from focal stack
US9769379B2 (en) 2015-02-15 2017-09-19 Hisense Mobile Communications Technology Co., Ltd. Method and apparatus for selecting target image
CN104680478B (en) * 2015-02-15 2018-08-21 青岛海信移动通信技术股份有限公司 A kind of choosing method and device of destination image data
CN104680478A (en) * 2015-02-15 2015-06-03 青岛海信移动通信技术股份有限公司 Selection method and device for target image data
WO2017143654A1 (en) * 2016-02-23 2017-08-31 中兴通讯股份有限公司 Method for selecting photo to be outputted, photographing method, device and storage medium
CN107105150A (en) * 2016-02-23 2017-08-29 中兴通讯股份有限公司 A kind of method, photographic method and its corresponding intrument of selection photo to be output
CN106973219A (en) * 2017-02-21 2017-07-21 苏州科达科技股份有限公司 A kind of auto focusing method and device based on area-of-interest
WO2018153149A1 (en) * 2017-02-21 2018-08-30 苏州科达科技股份有限公司 Automatic focusing method and apparatus based on region of interest
CN106973219B (en) * 2017-02-21 2019-06-28 苏州科达科技股份有限公司 A kind of auto focusing method and device based on area-of-interest
US11050922B2 (en) 2017-02-21 2021-06-29 Suzhou Keda Technology Co., Ltd. Automatic focusing method and apparatus based on region of interest
CN109005339B (en) * 2018-07-27 2020-07-28 努比亚技术有限公司 Image acquisition method, terminal and storage medium
CN109005339A (en) * 2018-07-27 2018-12-14 努比亚技术有限公司 A kind of image-pickup method, terminal and storage medium
CN109348114A (en) * 2018-11-26 2019-02-15 Oppo广东移动通信有限公司 Imaging device and electronic equipment
CN110287826A (en) * 2019-06-11 2019-09-27 北京工业大学 A kind of video object detection method based on attention mechanism
CN110278383A (en) * 2019-07-25 2019-09-24 浙江大华技术股份有限公司 Focus method, device and electronic equipment, storage medium

Also Published As

Publication number Publication date
CN103702032B (en) 2017-04-12

Similar Documents

Publication Publication Date Title
CN103702032A (en) Image processing method, device and terminal equipment
JP6479142B2 (en) Image identification and organization according to layout without user intervention
US10134165B2 (en) Image distractor detection and processing
CN107409166B (en) Automatic generation of panning shots
US20160063672A1 (en) Electronic device and method for generating thumbnail picture
EP3545686B1 (en) Methods and apparatus for generating video content
CN108228050B (en) Picture scaling method and device and electronic equipment
CN103826064A (en) Image processing method, device and handheld electronic equipment
CN111258519B (en) Screen split implementation method, device, terminal and medium
CN104360847A (en) Method and equipment for processing image
WO2013179560A1 (en) Image processing device and image processing method
CN104754223A (en) Method for generating thumbnail and shooting terminal
CN107103890A (en) Display application on fixed-direction display
CN103700062A (en) Image processing method and device
CN112019891B (en) Multimedia content display method and device, terminal and storage medium
JP2022500792A (en) Image processing methods and devices, electronic devices and storage media
CN107766703B (en) Watermark adding processing method and device and client
CN109791703B (en) Generating three-dimensional user experience based on two-dimensional media content
CN102568443B (en) Digital image scaling method
US20130236117A1 (en) Apparatus and method for providing blurred image
EP3151243B1 (en) Accessing a video segment
CN110602410A (en) Image processing method and device, aerial camera and storage medium
CN113810755B (en) Panoramic video preview method and device, electronic equipment and storage medium
US9723216B2 (en) Method and system for generating an image including optically zoomed and digitally zoomed regions
CN111179166B (en) Image processing method, device, equipment and computer readable storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant