CN105227867A - Image processing method and electronic device - Google Patents

Image processing method and electronic device

Info

Publication number
CN105227867A
CN105227867A CN201510583839.7A
Authority
CN
China
Prior art keywords
image
area image
current frame
area
frame image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510583839.7A
Other languages
Chinese (zh)
Inventor
陈文辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201510583839.7A
Publication of CN105227867A
Legal status: Pending


Abstract

The invention discloses an image processing method and an electronic device. The method comprises: during image acquisition of an image acquisition region, determining a target object from a current frame image; determining a first area image of the target object in the current frame image; processing the first area image to obtain a second area image; and synthesizing the second area image with the current frame image to obtain a composite frame image. The method solves the prior-art technical problem that the processor of an electronic device bears a heavy burden in the process of capturing and processing images, and achieves the technical effect of reducing the processor burden of the electronic device during image capture and processing.

Description

Image processing method and electronic device
Technical field
The present invention relates to the field of electronic technology, and in particular to an image processing method and an electronic device.
Background art
With the rapid development of digital image acquisition technology, not only have image capture devices such as digital cameras and digital video cameras become widespread, but handheld terminals such as smartphones and tablet computers also offer good image capture functions.
When taking photos or recording video, the original image may not present the effect the user wants, so the user usually makes some adjustments to the image, for example: adjusting brightness, adding a filter, or enlarging the image and then cropping a part of it to obtain a close-up of the subject.
However, in the course of implementing the technical solutions in the embodiments of the present application, the inventor found that the above technology has at least the following technical problem:
If the user frequently captures images and adjusts their presented effect, on the one hand the workload of the electronic device increases, which in turn increases the burden on the processor; on the other hand, the electronic device needs to store both the original image and the adjusted image, which also increases its storage burden.
Summary of the invention
The embodiments of the present invention provide an image processing method and an electronic device, so as to solve the prior-art technical problem that the processor of an electronic device bears a heavy burden in the process of capturing and processing images.
In one aspect, an embodiment of the present application provides an image processing method, comprising:
during image acquisition of an image acquisition region, determining a target object from a current frame image;
determining a first area image of the target object in the current frame image;
processing the first area image to obtain a second area image;
synthesizing the second area image with the current frame image to obtain a composite frame image.
Optionally, determining the first area image of the target object in the current frame image comprises:
calculating the contour of the target object, and determining the image within the contour to be the first area image.
Optionally, processing the first area image comprises:
moving the first area image; or
scaling the first area image; or
performing image editing on the first area image.
Optionally, processing the first area image to obtain a second area image comprises:
enlarging or shrinking the first area image according to a scaling ratio to obtain the second area image.
Optionally, before enlarging or shrinking the first area image according to a scaling ratio to obtain the second area image, the method further comprises:
determining the center point of the first area image;
selecting a plurality of edge points from the edge of the first area image;
obtaining a first distance from the center point to a first edge point among the plurality of edge points, and a second distance from the center point to the edge of the display unit in the direction from the center point towards the first edge point;
determining the ratio of the second distance to the first distance, thereby obtaining a plurality of ratios in one-to-one correspondence with the plurality of edge points, and determining the smallest ratio among the plurality of ratios to be the scaling ratio.
Optionally, enlarging or shrinking the first area image according to a scaling ratio to obtain the second area image comprises:
enlarging or shrinking the first area image according to the scaling ratio to obtain the second area image, wherein the resolution of the second area image is identical to the resolution of the first area image.
Optionally, synthesizing the second area image with the current frame image to obtain a composite frame image comprises:
obtaining the center point of the first area image and a first position of the center point in the current frame image;
determining a second position of the center point in the second area image;
synthesizing the second area image with the current frame image so that the first position and the second position coincide, to obtain the composite frame image.
Optionally, moving the first area image comprises:
determining a first position from the current frame image;
moving the first area image from the second position where the first area image is located to the first position.
Optionally, the method further comprises:
while recording video of the image acquisition region, determining the target object from other frame images;
recording the processing rule used to process the first area image in the current frame image, and processing the area image where the target object is located in the other frame images according to the processing rule.
Optionally, recording the processing rule used to process the first area image in the current frame image and processing the area image where the target object is located in the other frame images according to the processing rule comprises:
recording the scaling ratio used when scaling the first area image;
scaling the area image where the target object is located according to the scaling ratio.
Optionally, recording the processing rule used to process the first area image in the current frame image and processing the area image where the target object is located in the other frame images according to the processing rule comprises:
recording the position of the second area image in the composite frame image, or the relative positional relationship between the second area image and at least one other object in the composite frame image;
moving the area image where the target object is located according to the position or the relative positional relationship.
Optionally, recording the processing rule used to process the first area image in the current frame image and processing the area image where the target object is located in the other frame images according to the processing rule comprises:
recording the editing effect used when editing the first area image in the current frame image;
editing the area image where the target object is located according to the editing effect.
Optionally, determining a target object from a current frame image comprises:
performing intelligent object recognition on the current frame image, and determining a first object from the current frame image.
Optionally, determining a target object from a current frame image comprises:
obtaining a first feature value of a first object in the current frame image and a preset feature value;
judging whether the first feature value matches the preset feature value;
if the first feature value matches the preset feature value, determining that the first object is the target object.
In another aspect, an embodiment of the present application further provides an electronic device, comprising:
an image acquisition unit; and
a processor connected to the image acquisition unit, the processor being configured to: during image acquisition of an image acquisition region, determine a target object from a current frame image; determine a first area image of the target object in the current frame image; process the first area image to obtain a second area image; and synthesize the second area image with the current frame image to obtain a composite frame image.
Optionally, the processor is specifically configured to:
calculate the contour of the target object, and determine the image within the contour to be the first area image.
Optionally, the processor is specifically configured to:
move the first area image; or
scale the first area image; or
perform image editing on the first area image.
Optionally, the processor is specifically configured to:
enlarge or shrink the first area image according to a scaling ratio to obtain the second area image.
Optionally, the processor is further configured to:
before enlarging or shrinking the first area image according to a scaling ratio to obtain the second area image, determine the center point of the first area image;
select a plurality of edge points from the edge of the first area image;
obtain a first distance from the center point to a first edge point among the plurality of edge points, and a second distance from the center point to the edge of the display unit in the direction from the center point towards the first edge point;
determine the ratio of the second distance to the first distance, thereby obtaining a plurality of ratios in one-to-one correspondence with the plurality of edge points, and determine the smallest ratio among the plurality of ratios to be the scaling ratio.
Optionally, the processor is specifically configured to:
enlarge or shrink the first area image according to the scaling ratio to obtain the second area image, wherein the resolution of the second area image is identical to the resolution of the first area image.
Optionally, the processor is specifically configured to:
obtain the center point of the first area image and a first position of the center point in the current frame image;
determine a second position of the center point in the second area image;
synthesize the second area image with the current frame image so that the first position and the second position coincide, to obtain the composite frame image.
Optionally, the processor is specifically configured to:
determine a first position from the current frame image;
move the first area image from the second position where the first area image is located to the first position.
Optionally, the processor is further configured to:
while recording video of the image acquisition region, determine the target object from other frame images;
record the processing rule used to process the first area image in the current frame image, and process the area image where the target object is located in the other frame images according to the processing rule.
Optionally, the processor is specifically configured to:
record the scaling ratio used when scaling the first area image;
scale the area image where the target object is located according to the scaling ratio.
Optionally, the processor is specifically configured to:
record the position of the second area image in the composite frame image, or the relative positional relationship between the second area image and at least one other object in the composite frame image;
move the area image where the target object is located according to the position or the relative positional relationship.
Optionally, the processor is specifically configured to:
record the editing effect used when editing the first area image in the current frame image;
edit the area image where the target object is located according to the editing effect.
Optionally, the processor is specifically configured to:
perform intelligent object recognition on the current frame image, and determine a first object from the current frame image.
Optionally, the processor is specifically configured to:
obtain a first feature value of a first object in the current frame image and a preset feature value;
judge whether the first feature value matches the preset feature value;
if the first feature value matches the preset feature value, determine that the first object is the target object.
In another aspect, an embodiment of the present application further provides an electronic device, comprising:
a target object determining unit, configured to determine a target object from a current frame image during image acquisition of an image acquisition region;
a first area image determining unit, configured to determine a first area image of the target object in the current frame image;
a first area image processing unit, configured to process the first area image to obtain a second area image; and
a synthesizing unit, configured to synthesize the second area image with the current frame image to obtain a composite frame image.
The one or more technical solutions above in the embodiments of the present application have at least one or more of the following technical effects:
1. In the solutions of the embodiments of the present application, during image acquisition of an image acquisition region, a target object is determined from the current frame image, the first area image where the target object is located is determined, the first area image is processed to obtain a second area image, and finally the second area image is synthesized with the current frame image to obtain a composite frame image. Thus, in the solutions of the embodiments of the present application, the whole image is not processed; instead, the target object that needs processing is determined and only the first area image where the target object is located is processed. This reduces the burden on the electronic device, alleviates the prior-art technical problem that the processor of an electronic device bears a heavy burden in the process of capturing and processing images, and achieves the technical effect of reducing the processor burden of the electronic device during image capture and processing.
2. In the solutions of the embodiments of the present application, the original image is processed during image acquisition to obtain the composite frame image. Therefore, only the composite frame image obtained by acquisition needs to be saved in the electronic device, and the original image does not need to be kept, which reduces the storage burden of the electronic device.
3. In the solutions of the embodiments of the present application, when the first area image is processed, it can be enlarged or shrunk according to a scaling ratio to obtain the second area image, which is then synthesized with the current frame image. In the prior art, the photographed subject may be too small, or the background too large, making the subject and the background out of proportion. If the subject is shot in close-up, the image cannot contain the complete background; if the complete background is included, the subject appears small. Both cases harm the composition of the image. With the solutions of the embodiments of the present application, the subject can be enlarged while the background is kept unchanged, which both highlights the subject and retains the complete background, thereby optimizing the composition of the image.
4. In the solutions of the embodiments of the present application, the center point of the first area image is determined, together with the first distance from the center point to an edge point of the first area image and the second distance from the center point to the edge of the display unit in the direction from the center point towards that edge point; the ratio of the second distance to the first distance is then determined, yielding a plurality of ratios in one-to-one correspondence with the plurality of edge points, and the smallest of these ratios is determined to be the scaling ratio. In this way the enlargement ratio of the first area image is determined automatically, the enlarged first area image does not exceed the edge of the display unit, and the composition of the image is optimized.
5. In the solutions of the embodiments of the present application, after the first area image is shrunk or enlarged, its resolution is controlled to remain unchanged, so that after the first area image is synthesized with the current frame image, the problem of the first area image differing in resolution from the other area images in the current frame image is avoided, and the display effect of the image is optimized.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the image processing method in an embodiment of the present application;
Fig. 2 is a schematic flowchart of determining the scaling ratio in an embodiment of the present application;
Fig. 3 is a schematic diagram of the center point of the first area image in an embodiment of the present application;
Fig. 4 is a schematic flowchart of a specific implementation of S13 in an embodiment of the present application;
Fig. 5 is a schematic flowchart of the method when video of the image acquisition region is being recorded in an embodiment of the present application;
Fig. 6 is a structural block diagram of an electronic device in an embodiment of the present application;
Fig. 7 is a structural block diagram of another electronic device in an embodiment of the present application.
Detailed description of the embodiments
In the technical solutions provided by the embodiments of the present application, the first area image where the target object is located in the current frame image is processed, which reduces the burden on the electronic device, alleviates the prior-art technical problem that the processor of an electronic device bears a heavy burden in the process of capturing and processing images, and achieves the technical effect of reducing the processor burden of the electronic device during image capture and processing.
The main implementation principle and specific embodiments of the technical solutions of the embodiments of the present application, and the beneficial effects they can achieve, are explained in detail below with reference to the accompanying drawings.
As shown in Fig. 1, an embodiment of the present application provides an image processing method that can be applied to electronic devices such as digital cameras, digital video cameras, smartphones or tablet computers. The method comprises:
S10: during image acquisition of an image acquisition region, determining a target object from a current frame image.
S11: determining a first area image of the target object in the current frame image.
S12: processing the first area image to obtain a second area image.
S13: synthesizing the second area image with the current frame image to obtain a composite frame image.
In S10, the current frame image is the preview image presented on the display unit of the electronic device, and the target object determined from the current frame image may be a person, an animal, a building, and so on. Specifically, in the embodiments of the present application, the target object may be determined in the following three ways.
First way: performing intelligent object recognition on the current frame image, and determining a first object from the current frame image.
Specifically, the intelligent object recognition may be face recognition, in which case the whole human body image containing the face is taken as the target object. The intelligent object recognition may also be color recognition, contour recognition and so on, and the recognized object may be a cup, an animal, a building, etc.; the present application imposes no restriction on this.
Second way: obtaining a first feature value of a first object in the current frame image and a preset feature value; judging whether the first feature value matches the preset feature value; and if the first feature value matches the preset feature value, determining that the first object is the target object.
In the second way, feature values of multiple objects may be pre-stored in the electronic device or in a cloud server. For example, the feature value of an object may be the facial feature value of a user, the contour feature value of an animal, and so on; the preset feature value may be the feature value of an object previously photographed by the electronic device, or the feature value of an object entered by the user; and the preset feature value may be kept in the local database of the electronic device or in a cloud database. The present application imposes no restriction on any of this.
Further, after the electronic device obtains the first feature value of the first object in the current frame image, the first feature value is matched against the preset feature values in the local database or the cloud database; if a preset feature value matching the first feature value exists, the first object is determined to be the target object.
Third way: if no feature value is preset in the electronic device, or no preset feature value matches the first feature value of the first object, the current frame image may be compared with the previous frame image to detect a moving object. If a moving object exists, the moving object is taken as the target object.
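A minimal Python sketch of this frame-comparison idea, assuming OpenCV 4.x and NumPy are available; the function name, threshold and minimum-area values are illustrative assumptions rather than part of the original disclosure:

```python
import cv2
import numpy as np

def detect_moving_object(prev_frame, curr_frame, min_area=500):
    """Return the bounding box (x, y, w, h) of the largest moving region,
    or None if no motion larger than min_area is found."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    # The absolute difference between consecutive frames highlights motion.
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if cv2.contourArea(c) >= min_area]
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(largest)
```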
In the embodiments of the present application, after the target object is determined in S10, the electronic device performs S11 and determines the first area image of the target object in the current frame image. Specifically, the electronic device may calculate the contour of the target object and determine the image within the contour to be the first area image, thereby determining, from the current frame image, the first area image where the target object is located.
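As a non-authoritative sketch of how the contour could yield the first area image, assuming the target object has already been segmented into a binary mask (OpenCV/NumPy used purely for illustration):

```python
import cv2
import numpy as np

def extract_first_area_image(frame, target_mask):
    """Given the current frame and a binary mask of the target object, compute
    the object's outer contour and return the image inside it plus its bbox."""
    contours, _ = cv2.findContours(target_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None, None
    outer = max(contours, key=cv2.contourArea)            # outer contour of the target
    contour_mask = np.zeros_like(target_mask)
    cv2.drawContours(contour_mask, [outer], -1, 255, thickness=cv2.FILLED)
    first_area = cv2.bitwise_and(frame, frame, mask=contour_mask)
    x, y, w, h = cv2.boundingRect(outer)                  # crop to the contour's bounding box
    return first_area[y:y + h, x:x + w], (x, y, w, h)
```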
Next, the electronic device proceeds to S12 and processes the first area image to obtain a second area image. The following three implementations of S12 are described in the embodiments of the present application; specific implementations are not limited to these three.
First way: scaling the first area image.
Specifically, the electronic device first needs to determine the scaling ratio of the first area image. In a specific implementation, the electronic device may determine the scaling ratio according to a gesture operation performed by the user on the display unit, or according to the positional relationship of the first area image within the display unit.
When the scaling ratio is determined according to the positional relationship of the first area image within the display unit, as shown in Fig. 2, S20 is performed first: determining the center point of the first area image.
Specifically, as shown in Fig. 3, the smallest rectangle ABCD containing the first area image may first be determined, as shown by the dashed box in Fig. 3, with the sides of the rectangle parallel to the sides of the display unit; the geometric center of the rectangle ABCD is then determined to be the center point of the first area image, as shown by the intersection of the dashed lines in Fig. 3.
The electronic device then performs S21: selecting a plurality of edge points from the edge of the first area image.
In one optional embodiment of the present application, the number of edge points may be fixed. Assuming M edge points are needed, one edge point is selected at regular intervals determined from the number of edge points and the length of the edge of the first area image, thereby obtaining M edge points, where the interval is the length of the edge between two adjacent edge points.
In another optional embodiment, the interval may be fixed. Assuming the interval is 1 mm, one edge point is selected every 1 mm along the edge of the first area image.
Next, the electronic device performs S22: obtaining a first distance from the center point to a first edge point among the plurality of edge points, and a second distance from the center point to the edge of the display unit in the direction from the center point towards the first edge point.
As shown in Fig. 3, the center point is O and the first edge point is P; the first distance is the length of line segment OP, and in the direction from the center point O towards the first edge point P, the length of line segment OQ is the second distance described above.
The electronic device then performs S23: determining the ratio of the second distance to the first distance, thereby obtaining a plurality of ratios in one-to-one correspondence with the plurality of edge points, and determining the smallest ratio among the plurality of ratios to be the scaling ratio. For example, assuming OP = 1 and OQ = 2, the ratio determined from line segments OP and OQ is 2; if the ratios determined from the other edge points are all greater than 2, the scaling ratio is determined to be 2, and the first area image is enlarged by a factor of two.
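The geometry of S20–S23 can be sketched as follows, assuming the first area image is represented by its center point and a set of sampled edge points in display-pixel coordinates (all names are illustrative):

```python
import math

def compute_scaling_ratio(center, edge_points, display_w, display_h):
    """center: (cx, cy); edge_points: list of (x, y) on the region's contour.
    For each edge point, take the ratio of the distance from the center to the
    display border along that direction (second distance) to the distance from
    the center to the edge point (first distance), and return the smallest ratio."""
    cx, cy = center
    ratios = []
    for (px, py) in edge_points:
        dx, dy = px - cx, py - cy
        first_dist = math.hypot(dx, dy)
        if first_dist == 0:
            continue
        # Smallest positive t such that center + t*(dx, dy) hits a display border.
        t_candidates = []
        if dx > 0: t_candidates.append((display_w - cx) / dx)
        if dx < 0: t_candidates.append((0 - cx) / dx)
        if dy > 0: t_candidates.append((display_h - cy) / dy)
        if dy < 0: t_candidates.append((0 - cy) / dy)
        t = min(t_candidates)
        second_dist = t * first_dist
        ratios.append(second_dist / first_dist)
    return min(ratios) if ratios else 1.0
```

With this choice, scaling the region about its center by the returned ratio keeps every sampled edge point on or inside the display border, which matches the "enlarged image does not exceed the edge of the display unit" effect described above.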
Further, in the embodiments of the present application, in order to optimize the display effect of the enlarged image, the method further comprises:
enlarging or shrinking the first area image according to the scaling ratio to obtain the second area image, wherein the resolution of the second area image is identical to the resolution of the first area image. In a specific implementation, the first area image may be enlarged by an interpolation algorithm to obtain the second area image, so that the resolution of the second area image is the same as that of the first area image.
In another optional embodiment, because the resolution of the image acquisition unit of the electronic device is greater than the resolution of the display unit, the resolution of an image acquired by the image acquisition unit is greater than that of the display unit. When displaying an image, the display unit can adjust the resolution of the image to a resolution matching the display unit and then display it. If the resolution of the second area image is greater than the resolution of the display unit, then when the electronic device displays the composite frame image on the display unit, it can adjust both the second area image and the remainder of the composite frame image other than the second area image, so that the resolution of the finally displayed image matches the display unit and the resolution of the second area image is the same as that of the remainder; the resolution of the second area image will therefore not be lower than that of the remainder. In this case, the electronic device can directly enlarge the first area image to obtain the second area image.
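A minimal sketch of the interpolation-based enlargement mentioned above, assuming OpenCV; the specific interpolation modes are an illustrative assumption, not mandated by the patent:

```python
import cv2

def scale_area_image(first_area, scale):
    """Enlarge (scale > 1) or shrink (scale < 1) the first area image.
    Cubic interpolation is used when enlarging so the scaled image keeps a
    pixel density comparable to the original; area interpolation when shrinking."""
    h, w = first_area.shape[:2]
    new_size = (max(1, int(round(w * scale))), max(1, int(round(h * scale))))
    interp = cv2.INTER_CUBIC if scale > 1 else cv2.INTER_AREA
    return cv2.resize(first_area, new_size, interpolation=interp)
```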
In the embodiments of the present application, as shown in Fig. 4, after the enlarged second area image is obtained, the electronic device performs S13, synthesizing the second area image with the current frame image to obtain a composite frame image. Specifically, this comprises:
S131: obtaining the center point of the first area image and the first position of the center point in the current frame image. The center point may be the center point determined in S20, and the first position of the center point in the current frame image may be expressed as a pixel position.
Next, the electronic device performs S132: determining the second position of the center point in the second area image. The second position may be expressed as the pixel position corresponding to the center point in the second area image.
After obtaining the first position and the second position, the electronic device performs S133: synthesizing the second area image with the current frame image so that the first position and the second position coincide, to obtain the composite frame image. For example, if the pixel corresponding to the center point of the first area image is p1 and the pixel corresponding to the center point of the second area image is p2, then when the images are composited, pixel p1 is made to coincide with pixel p2, thereby obtaining the composite frame image.
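The center alignment of S131–S133 can be sketched as a simple overlay; this assumes a rectangular second area image and clips it at the frame boundaries, both of which are illustrative simplifications:

```python
import numpy as np

def compose_frame(current_frame, second_area, first_position):
    """Overlay second_area on a copy of current_frame so that the center of
    second_area (the second position) coincides with first_position, the center
    of the original first area image in the current frame."""
    composite = current_frame.copy()
    frame_h, frame_w = composite.shape[:2]
    area_h, area_w = second_area.shape[:2]
    cx, cy = first_position
    # Top-left corner such that the area's center lands on (cx, cy).
    x0, y0 = int(cx - area_w // 2), int(cy - area_h // 2)
    # Clip the overlay to the frame boundaries.
    xs, ys = max(x0, 0), max(y0, 0)
    xe, ye = min(x0 + area_w, frame_w), min(y0 + area_h, frame_h)
    composite[ys:ye, xs:xe] = second_area[ys - y0:ye - y0, xs - x0:xe - x0]
    return composite
```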
Next, the second implementation of S12 is described.
Second way: moving the first area image.
First, the electronic device determines a first position from the current frame image. In the embodiments of the present application, the electronic device may determine the first position according to a gesture operation performed by the user on the display unit. For example, after the first area image is determined, the user may drag the first area image on the display unit from its original position to the first position; when the user stops the drag operation, the position where it stops is the first position.
The first area image is then moved from the second position where the first area image is located to the first position, obtaining the second area image. The second position is the original position of the first area image in the current frame image.
Further, after the first area image is moved, the region at the original position in the current frame image may also be repaired; the electronic device then performs S13, synthesizing the repaired image with the second area image to obtain the composite frame image.
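One way the move-and-repair step could be sketched is with image inpainting; the use of cv2.inpaint and the assumption that the new position keeps the region inside the frame are illustrative choices, since the patent only states that the original region is repaired:

```python
import cv2
import numpy as np

def move_and_repair(current_frame, area_bbox, first_position):
    """Cut the first area image out of its original (second) position, repair the
    vacated region, and paste the cut-out at the user-chosen first position
    (given here as the new top-left corner)."""
    x, y, w, h = area_bbox
    area_img = current_frame[y:y + h, x:x + w].copy()
    # Mark the vacated region and fill it from the surrounding pixels.
    hole_mask = np.zeros(current_frame.shape[:2], dtype=np.uint8)
    hole_mask[y:y + h, x:x + w] = 255
    repaired = cv2.inpaint(current_frame, hole_mask, 3, cv2.INPAINT_TELEA)
    # Paste the area image at the first position (assumed to fit within the frame).
    nx, ny = first_position
    repaired[ny:ny + h, nx:nx + w] = area_img
    return repaired
```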
Next, the third implementation of S12 is described.
Third way: performing image editing on the first area image.
Specifically, editing the first area image includes adding a light effect to the first area image, adding a filter effect, performing edge highlighting, and so on, to obtain the second area image; the present application imposes no restriction on this. The electronic device then performs S13, synthesizing the second area image with the current frame image to obtain the composite frame image.
In the embodiments of the present application, as shown in Fig. 5, the above image processing method can also be applied while recording video, in which case the method further comprises:
S30: while recording video of the image acquisition region, determining the target object from other frame images. The target object is the same as the target object determined from the current frame image; for example, if the target object determined from the current frame image is a cup, the target object determined from the other frame images is also a cup.
After the target object is determined from the other frame images, the electronic device performs S31: recording the processing rule used to process the first area image in the current frame image, and processing the area image where the target object is located in the other frame images according to the processing rule.
The following three implementations of S31 are described in the embodiments of the present application.
First way: when the operation performed by the electronic device on the current frame image is scaling the first area image, S31 comprises recording the scaling ratio used when scaling the first area image, and scaling the area image where the target object is located according to the scaling ratio. For example, if the first area image in the current frame image is enlarged by a factor of 2, then when the other frame images are processed, the area image where the target object is located is also enlarged by a factor of 2.
Second way: when the operation performed by the electronic device on the current frame image is moving the first area image, S31 comprises recording the position of the second area image in the composite frame image, or the relative positional relationship between the second area image and at least one other object in the composite frame image, and moving the area image where the target object is located according to the position or the relative positional relationship. For example, if in the composite frame image the second area image is located in region A, then when the other frame images are processed, the area image where the target object is located is moved to region A. As another example, if in the composite frame image the second area image is located between object B and object C, then when the other frame images are processed, the area image where the target object is located is moved between object B and object C.
In a specific implementation, the at least one other object may be a person, an animal, a building and so on; the present application imposes no restriction on this. Through the second way, the user can move the target object to a position where photography is not permitted, or to a position the target object cannot reach.
Third way: when the operation performed by the electronic device on the current frame image is editing the first area image, the editing effect used when editing the first area image in the current frame image is recorded, and the area image where the target object is located is edited according to the editing effect. For example, if a filter effect is added to the first area image when the first area image in the current frame image is edited, then when the other frame images are processed, the same filter effect is added to the area image where the target object is located.
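To illustrate how a recorded processing rule might be replayed on the other frames, the following sketch reuses the scale_area_image, compose_frame and move_and_repair helpers from the earlier sketches; the dictionary-based rule encoding is purely an illustrative assumption:

```python
def apply_processing_rule(frame, target_bbox, rule):
    """Apply the rule recorded for the current frame to the area image where the
    target object is located in another frame. rule is a dict such as
    {'type': 'scale', 'ratio': 2.0}, {'type': 'move', 'position': (x, y)} or
    {'type': 'edit', 'effect': filter_fn} -- an illustrative encoding only."""
    x, y, w, h = target_bbox
    center = (x + w // 2, y + h // 2)
    area_img = frame[y:y + h, x:x + w]
    if rule['type'] == 'scale':
        second_area = scale_area_image(area_img, rule['ratio'])
        return compose_frame(frame, second_area, center)      # enlarge about the same center
    if rule['type'] == 'move':
        return move_and_repair(frame, target_bbox, rule['position'])
    if rule['type'] == 'edit':
        edited = rule['effect'](area_img)                     # e.g. a filter function
        return compose_frame(frame, edited, center)
    return frame
```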
In the embodiments of the present application, while video of the image acquisition region is being recorded, the composite frame image is saved into the video file. Further, after the target objects in the other frame images are processed according to the processing rule of the current frame image, the composite frame images so generated are also saved in the electronic device in sequence, so that a complete video file is generated from the other frame images.
Based on the same inventive concept, an embodiment of the present application also provides an electronic device, as shown in Fig. 6, comprising:
an image acquisition unit 40; and
a processor 41 connected to the image acquisition unit 40, the processor 41 being configured to: during image acquisition of an image acquisition region, determine a target object from a current frame image; determine a first area image of the target object in the current frame image; process the first area image to obtain a second area image; and synthesize the second area image with the current frame image to obtain a composite frame image.
Optionally, the processor 41 is specifically configured to:
calculate the contour of the target object, and determine the image within the contour to be the first area image.
Optionally, the processor 41 is specifically configured to:
move the first area image; or
scale the first area image; or
perform image editing on the first area image.
Optionally, the processor 41 is specifically configured to:
enlarge or shrink the first area image according to a scaling ratio to obtain the second area image.
Optionally, the processor 41 is further configured to:
before enlarging or shrinking the first area image according to a scaling ratio to obtain the second area image, determine the center point of the first area image;
select a plurality of edge points from the edge of the first area image;
obtain a first distance from the center point to a first edge point among the plurality of edge points, and a second distance from the center point to the edge of the display unit in the direction from the center point towards the first edge point;
determine the ratio of the second distance to the first distance, thereby obtaining a plurality of ratios in one-to-one correspondence with the plurality of edge points, and determine the smallest ratio among the plurality of ratios to be the scaling ratio.
Optionally, the processor 41 is specifically configured to:
enlarge or shrink the first area image according to the scaling ratio to obtain the second area image, wherein the resolution of the second area image is identical to the resolution of the first area image.
Optionally, the processor 41 is specifically configured to:
obtain the center point of the first area image and a first position of the center point in the current frame image;
determine a second position of the center point in the second area image;
synthesize the second area image with the current frame image so that the first position and the second position coincide, to obtain the composite frame image.
Optionally, the processor 41 is specifically configured to:
determine a first position from the current frame image;
move the first area image from the second position where the first area image is located to the first position.
Optionally, the processor 41 is further configured to:
while recording video of the image acquisition region, determine the target object from other frame images;
record the processing rule used to process the first area image in the current frame image, and process the area image where the target object is located in the other frame images according to the processing rule.
Optionally, the processor 41 is specifically configured to:
record the scaling ratio used when scaling the first area image;
scale the area image where the target object is located according to the scaling ratio.
Optionally, the processor 41 is specifically configured to:
record the position of the second area image in the composite frame image, or the relative positional relationship between the second area image and at least one other object in the composite frame image;
move the area image where the target object is located according to the position or the relative positional relationship.
Optionally, the processor 41 is specifically configured to:
record the editing effect used when editing the first area image in the current frame image;
edit the area image where the target object is located according to the editing effect.
Optionally, the processor 41 is specifically configured to:
perform intelligent object recognition on the current frame image, and determine a first object from the current frame image.
Optionally, the processor 41 is specifically configured to:
obtain a first feature value of a first object in the current frame image and a preset feature value;
judge whether the first feature value matches the preset feature value;
if the first feature value matches the preset feature value, determine that the first object is the target object.
Based on the same inventive concept, an embodiment of the present application also provides an electronic device, as shown in Fig. 7, comprising:
a target object determining unit 50, configured to determine a target object from a current frame image during image acquisition of an image acquisition region;
a first area image determining unit 51, configured to determine a first area image of the target object in the current frame image;
a first area image processing unit 52, configured to process the first area image to obtain a second area image; and
a synthesizing unit 53, configured to synthesize the second area image with the current frame image to obtain a composite frame image.
Optionally, the first area image determining unit 51 is specifically configured to calculate the contour of the target object and determine the image within the contour to be the first area image.
Optionally, the first area image processing unit 52 is specifically configured to: move the first area image; or
scale the first area image; or
perform image editing on the first area image.
Optionally, the first area image processing unit 52 is specifically configured to enlarge or shrink the first area image according to a scaling ratio to obtain the second area image.
Optionally, the electronic device further comprises:
a center point determining unit, configured to determine the center point of the first area image before the first area image is enlarged or shrunk according to a scaling ratio to obtain the second area image;
an edge point selecting unit, configured to select a plurality of edge points from the edge of the first area image;
a distance obtaining unit, configured to obtain a first distance from the center point to a first edge point among the plurality of edge points, and a second distance from the center point to the edge of the display unit in the direction from the center point towards the first edge point; and
a scaling ratio determining unit, configured to determine the ratio of the second distance to the first distance, thereby obtaining a plurality of ratios in one-to-one correspondence with the plurality of edge points, and determine the smallest ratio among the plurality of ratios to be the scaling ratio.
Optionally, the first area image processing unit 52 is specifically configured to enlarge or shrink the first area image according to the scaling ratio to obtain the second area image, wherein the resolution of the second area image is identical to the resolution of the first area image.
Optionally, the synthesizing unit 53 is specifically configured to: obtain the center point of the first area image and a first position of the center point in the current frame image;
determine a second position of the center point in the second area image; and
synthesize the second area image with the current frame image so that the first position and the second position coincide, to obtain the composite frame image.
Optionally, the first area image processing unit 52 is specifically configured to: determine a first position from the current frame image; and
move the first area image from the second position where the first area image is located to the first position.
Optionally, the target object determining unit 50 is further configured to determine the target object from other frame images while video of the image acquisition region is being recorded;
and the first area image processing unit 52 is further configured to record the processing rule used to process the first area image in the current frame image, and process the area image where the target object is located in the other frame images according to the processing rule.
Optionally, the first area image processing unit 52 is further configured to: record the scaling ratio used when scaling the first area image; and
scale the area image where the target object is located according to the scaling ratio.
Optionally, the first area image processing unit 52 is further configured to: record the position of the second area image in the composite frame image, or the relative positional relationship between the second area image and at least one other object in the composite frame image; and
move the area image where the target object is located according to the position or the relative positional relationship.
Optionally, the first area image processing unit 52 is further configured to: record the editing effect used when editing the first area image in the current frame image; and
edit the area image where the target object is located according to the editing effect.
Optionally, the target object determining unit 50 is specifically configured to perform intelligent object recognition on the current frame image and determine a first object from the current frame image.
Optionally, the target object determining unit 50 is specifically configured to: obtain a first feature value of a first object in the current frame image and a preset feature value; judge whether the first feature value matches the preset feature value; and if the first feature value matches the preset feature value, determine that the first object is the target object.
Through the one or more technical solutions in the embodiments of the present application, the following one or more technical effects can be achieved:
1. In the solutions of the embodiments of the present application, during image acquisition of an image acquisition region, a target object is determined from the current frame image, the first area image where the target object is located is determined, the first area image is processed to obtain a second area image, and finally the second area image is synthesized with the current frame image to obtain a composite frame image. Thus, in the solutions of the embodiments of the present application, the whole image is not processed; instead, the target object that needs processing is determined and only the first area image where the target object is located is processed. This reduces the burden on the electronic device, alleviates the prior-art technical problem that the processor of an electronic device bears a heavy burden in the process of capturing and processing images, and achieves the technical effect of reducing the processor burden of the electronic device during image capture and processing.
2. In the solutions of the embodiments of the present application, the original image is processed during image acquisition to obtain the composite frame image. Therefore, only the composite frame image obtained by acquisition needs to be saved in the electronic device, and the original image does not need to be kept, which reduces the storage burden of the electronic device.
3. In the solutions of the embodiments of the present application, when the first area image is processed, it can be enlarged or shrunk according to a scaling ratio to obtain the second area image, which is then synthesized with the current frame image. In the prior art, the photographed subject may be too small, or the background too large, making the subject and the background out of proportion. If the subject is shot in close-up, the image cannot contain the complete background; if the complete background is included, the subject appears small. Both cases harm the composition of the image. With the solutions of the embodiments of the present application, the subject can be enlarged while the background is kept unchanged, which both highlights the subject and retains the complete background, thereby optimizing the composition of the image.
4. In the solutions of the embodiments of the present application, the center point of the first area image is determined, together with the first distance from the center point to an edge point of the first area image and the second distance from the center point to the edge of the display unit in the direction from the center point towards that edge point; the ratio of the second distance to the first distance is then determined, yielding a plurality of ratios in one-to-one correspondence with the plurality of edge points, and the smallest of these ratios is determined to be the scaling ratio. In this way the enlargement ratio of the first area image is determined automatically, the enlarged first area image does not exceed the edge of the display unit, and the composition of the image is optimized.
5. In the solutions of the embodiments of the present application, after the first area image is shrunk or enlarged, its resolution is controlled to remain unchanged, so that after the first area image is synthesized with the current frame image, the problem of the first area image differing in resolution from the other area images in the current frame image is avoided, and the display effect of the image is optimized.
Those skilled in the art should understand that embodiments of the present invention may be provided as a method, a system or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular way, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Specifically, the computer program instructions that image processing method in the embodiment of the present application is corresponding can be stored in CD, hard disk, on the storage mediums such as USB flash disk, read by an electronic equipment when the computer program instructions corresponding with image processing method in storage medium or when being performed, comprise the steps:
Carrying out image acquisition region, in image acquisition process, from current frame image, determining destination object;
Determine the first area image of described destination object in described current frame image;
Described first area image is processed, obtains second area image;
Described second area image and described current frame image are synthesized, obtains synthetic frame image.Optionally, that store in described storage medium and step: determine the first area image of described destination object in described current frame image, corresponding computer instruction, being specifically performed in process, specifically comprises the steps:
Calculate the outline of described destination object, determine that the image in described outline is described first area image.
Optionally, that store in described storage medium and step: process described first area image, corresponding computer instruction, being specifically performed in process, specifically comprises the steps:
Described first area image is moved; Or
Convergent-divergent is carried out to described first area image; Or
Picture editting is carried out to described first area image.
Optionally, that store in described storage medium and step: process described first area image, obtain second area image, corresponding computer instruction, being specifically performed in process, specifically comprises the steps:
Described first area image is zoomed in or out according to a scaling, obtains described second area image.
Optionally, other computer instruction is also stored in described storage medium, these computer instructions with step: described first area image is zoomed in or out according to a scaling, obtain second area image, before corresponding computer instruction is performed, being performed, comprising the steps: when being performed
Determine the central point of described first area image;
Multiple marginal point is chosen from the edge of described first area image;
Obtain first distance of described central point to the first marginal point in described multiple marginal point, and point at described central point on the direction of described first marginal point, described central point is to the second distance at the edge of described display unit;
Determine the ratio of described second distance and described first distance, and then obtain and described multiple marginal point multiple ratio one to one, minimum ratio in described multiple ratio is defined as described scaling.
Optionally, that store in described storage medium and step: zoomed in or out according to a scaling by described first area image, obtain second area image, corresponding computer instruction, being specifically performed in process, specifically comprises the steps:
According to described scaling, described first area image is zoomed in or out, obtain described second area image; Wherein, the resolution of described second area image is identical with the resolution of described first area image.
Optionally, the computer instructions stored on the storage medium and corresponding to the step of synthesizing the second area image with the current frame image to obtain the synthetic frame image specifically comprise, when executed, the following steps:
Obtaining the center point of the first area image and a first position of the center point in the current frame image;
Determining a second position of the center point in the second area image;
Synthesizing the second area image with the current frame image such that the first position coincides with the second position, to obtain the synthetic frame image.
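A minimal sketch of this center-aligned composition, assuming Python with NumPy and a binary mask marking the valid pixels of the second area image (the `mask` argument is an assumption added for illustration):

```python
import numpy as np

def composite_center_aligned(frame, second_area_image, mask, first_position):
    """Paste `second_area_image` onto `frame` so that its center lands on `first_position`."""
    out = frame.copy()
    h, w = second_area_image.shape[:2]
    second_position = (w // 2, h // 2)               # center of the second area image
    x0 = first_position[0] - second_position[0]      # top-left corner of the paste rectangle
    y0 = first_position[1] - second_position[1]
    # Clip the paste rectangle to the frame bounds.
    fx0, fy0 = max(x0, 0), max(y0, 0)
    fx1, fy1 = min(x0 + w, frame.shape[1]), min(y0 + h, frame.shape[0])
    sx0, sy0 = fx0 - x0, fy0 - y0
    valid = mask[sy0:sy0 + (fy1 - fy0), sx0:sx0 + (fx1 - fx0)] > 0
    out[fy0:fy1, fx0:fx1][valid] = second_area_image[sy0:sy0 + (fy1 - fy0), sx0:sx0 + (fx1 - fx0)][valid]
    return out
```

Aligning the two center points in this way keeps the scaled or edited object anchored at the spot it occupied in the current frame image.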
Optionally, the computer instructions stored on the storage medium and corresponding to the step of moving the first area image specifically comprise, when executed, the following steps:
Determining a first position in the current frame image;
Moving the first area image from a second position, where the first area image is located, to the first position.
Optionally, further computer instructions are also stored on the storage medium, which comprise, when executed, the following steps:
During video recording of the image acquisition region, determining the target object from other frame images;
Recording the processing rule applied to the first area image in the current frame image, and processing, according to the processing rule, the area image where the target object is located in the other frame images.
Optionally, the computer instructions stored on the storage medium and corresponding to the step of recording the processing rule applied to the first area image in the current frame image and processing, according to the processing rule, the area image where the target object is located in the other frame images specifically comprise, when executed, the following steps:
Recording the scaling ratio used when the first area image is scaled;
Scaling, according to the scaling ratio, the area image where the target object is located.
Optionally, the computer instructions stored on the storage medium and corresponding to the step of recording the processing rule applied to the first area image in the current frame image and processing, according to the processing rule, the area image where the target object is located in the other frame images specifically comprise, when executed, the following steps:
Recording the position of the second area image in the synthetic frame image, or the relative position relationship between the second area image and at least one other object in the synthetic frame image;
Moving, according to the position or the relative position relationship, the area image where the target object is located.
Optionally, the computer instructions stored on the storage medium and corresponding to the step of recording the processing rule applied to the first area image in the current frame image and processing, according to the processing rule, the area image where the target object is located in the other frame images specifically comprise, when executed, the following steps:
Recording the editing effect used when the first area image is edited in the current frame image;
Editing, according to the editing effect, the area image where the target object is located.
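The three optional rules above (scaling ratio, position or relative position, and editing effect) all follow the same record-then-reapply pattern. Purely as an illustrative sketch, with field and helper names that are assumptions rather than the embodiment's API:

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

@dataclass
class ProcessingRule:
    """Records how the first area image was processed in the current frame image."""
    scaling_ratio: Optional[float] = None          # ratio used when the region was scaled
    position: Optional[Tuple[int, int]] = None     # where the second area image was placed
    edit_effect: Optional[Callable] = None         # e.g. a filter function applied to the region

def apply_rule(region_image, region_center, rule, scale_fn, move_fn, edit_fn):
    """Re-apply a recorded rule to the target object's region in another frame image.

    scale_fn, move_fn and edit_fn stand in for whatever concrete scaling, moving
    and editing routines the device actually uses; they are placeholders here.
    """
    out = region_image
    if rule.scaling_ratio is not None:
        out = scale_fn(out, rule.scaling_ratio)
    if rule.edit_effect is not None:
        out = edit_fn(out, rule.edit_effect)
    target = rule.position if rule.position is not None else region_center
    return move_fn(out, target)
```

Recording the rule once and replaying it for subsequent frames avoids re-deriving the user's adjustment for every frame of the video.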
Optionally, the computer instructions stored on the storage medium and corresponding to the step of determining the target object from the current frame image specifically comprise, when executed, the following step:
Performing intelligent object recognition on the current frame image, and determining a first object from the current frame image.
Optionally, the computer instructions stored on the storage medium and corresponding to the step of determining the target object from the current frame image specifically comprise, when executed, the following steps:
Obtaining a first feature value of a first object in the current frame image and a preset feature value;
Judging whether the first feature value matches the preset feature value;
If the first feature value matches the preset feature value, determining that the first object is the target object.
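As one hedged illustration of this feature-matching step, assuming a color-histogram feature and a correlation threshold (both assumptions; the embodiment does not fix the feature type), in Python with OpenCV:

```python
import cv2
import numpy as np

def color_histogram(image_bgr):
    """A simple hue-saturation histogram, used here as the feature value."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def is_target_object(first_object_image, preset_feature, threshold=0.8):
    """Match the first object's feature value against the preset feature value."""
    first_feature = color_histogram(first_object_image)
    similarity = cv2.compareHist(first_feature.astype(np.float32),
                                 preset_feature.astype(np.float32),
                                 cv2.HISTCMP_CORREL)
    return similarity >= threshold
```

In practice the preset feature value could equally be a face descriptor or any other feature the device supports; the histogram here is only the simplest stand-in.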
Although the preferred embodiments of the present invention have been described, those skilled in the art, once aware of the basic inventive concept, may make further changes and modifications to these embodiments. The appended claims are therefore intended to be construed as covering the preferred embodiments and all changes and modifications that fall within the scope of the present invention.
Obviously, those skilled in the art can make various changes and variations to the present invention without departing from its spirit and scope. If such changes and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to encompass them.

Claims (29)

1. An image processing method, comprising:
During image acquisition of an image acquisition region, determining a target object from a current frame image;
Determining a first area image of the target object in the current frame image;
Processing the first area image to obtain a second area image; and
Synthesizing the second area image with the current frame image to obtain a synthetic frame image.
2. The method according to claim 1, characterized in that determining the first area image of the target object in the current frame image comprises:
Calculating the outer contour of the target object, and determining the image within the outer contour as the first area image.
3. The method according to claim 1, characterized in that processing the first area image comprises:
Moving the first area image; or
Scaling the first area image; or
Performing image editing on the first area image.
4. The method according to claim 3, characterized in that processing the first area image to obtain the second area image comprises:
Zooming in or zooming out the first area image according to a scaling ratio to obtain the second area image.
5. The method according to claim 4, characterized in that, before the first area image is zoomed in or zoomed out according to the scaling ratio to obtain the second area image, the method further comprises:
Determining the center point of the first area image;
Selecting a plurality of edge points from the edge of the first area image;
Obtaining a first distance from the center point to a first edge point among the plurality of edge points, and a second distance, along the direction from the center point to the first edge point, from the center point to the edge of the display unit;
Determining the ratio of the second distance to the first distance, thereby obtaining a plurality of ratios in one-to-one correspondence with the plurality of edge points, and determining the minimum ratio among the plurality of ratios as the scaling ratio.
6. The method according to claim 5, characterized in that zooming in or zooming out the first area image according to the scaling ratio to obtain the second area image comprises:
Zooming in or zooming out the first area image according to the scaling ratio to obtain the second area image, wherein the resolution of the second area image is identical to the resolution of the first area image.
7. The method according to claim 4, characterized in that synthesizing the second area image with the current frame image to obtain the synthetic frame image comprises:
Obtaining the center point of the first area image and a first position of the center point in the current frame image;
Determining a second position of the center point in the second area image;
Synthesizing the second area image with the current frame image such that the first position coincides with the second position, to obtain the synthetic frame image.
8. The method according to claim 3, characterized in that moving the first area image comprises:
Determining a first position in the current frame image;
Moving the first area image from a second position, where the first area image is located, to the first position.
9. The method according to claim 1, characterized in that the method further comprises:
During video recording of the image acquisition region, determining the target object from other frame images;
Recording the processing rule applied to the first area image in the current frame image, and processing, according to the processing rule, the area image where the target object is located in the other frame images.
10. The method according to claim 9, characterized in that recording the processing rule applied to the first area image in the current frame image, and processing, according to the processing rule, the area image where the target object is located in the other frame images comprises:
Recording the scaling ratio used when the first area image is scaled;
Scaling, according to the scaling ratio, the area image where the target object is located.
11. The method according to claim 9, characterized in that recording the processing rule applied to the first area image in the current frame image, and processing, according to the processing rule, the area image where the target object is located in the other frame images comprises:
Recording the position of the second area image in the synthetic frame image, or the relative position relationship between the second area image and at least one other object in the synthetic frame image;
Moving, according to the position or the relative position relationship, the area image where the target object is located.
12. The method according to claim 9, characterized in that recording the processing rule applied to the first area image in the current frame image, and processing, according to the processing rule, the area image where the target object is located in the other frame images comprises:
Recording the editing effect used when the first area image is edited in the current frame image;
Editing, according to the editing effect, the area image where the target object is located.
13. The method according to claim 1, characterized in that determining the target object from the current frame image comprises:
Performing intelligent object recognition on the current frame image, and determining a first object from the current frame image.
14. The method according to claim 1, characterized in that determining the target object from the current frame image comprises:
Obtaining a first feature value of a first object in the current frame image and a preset feature value;
Judging whether the first feature value matches the preset feature value;
If the first feature value matches the preset feature value, determining that the first object is the target object.
15. An electronic device, comprising:
An image acquisition unit;
A processor connected to the image acquisition unit, wherein the processor is configured to: during image acquisition of an image acquisition region, determine a target object from a current frame image; determine a first area image of the target object in the current frame image; process the first area image to obtain a second area image; and synthesize the second area image with the current frame image to obtain a synthetic frame image.
16. The electronic device according to claim 15, characterized in that the processor is specifically configured to:
Calculate the outer contour of the target object, and determine the image within the outer contour as the first area image.
17. The electronic device according to claim 15, characterized in that the processor is specifically configured to:
Move the first area image; or
Scale the first area image; or
Perform image editing on the first area image.
18. The electronic device according to claim 17, characterized in that the processor is specifically configured to:
Zoom in or zoom out the first area image according to a scaling ratio to obtain the second area image.
19. The electronic device according to claim 18, characterized in that the processor is further configured to:
Before zooming in or zooming out the first area image according to the scaling ratio to obtain the second area image, determine the center point of the first area image;
Select a plurality of edge points from the edge of the first area image;
Obtain a first distance from the center point to a first edge point among the plurality of edge points, and a second distance, along the direction from the center point to the first edge point, from the center point to the edge of the display unit;
Determine the ratio of the second distance to the first distance, thereby obtaining a plurality of ratios in one-to-one correspondence with the plurality of edge points, and determine the minimum ratio among the plurality of ratios as the scaling ratio.
20. The electronic device according to claim 19, characterized in that the processor is specifically configured to:
Zoom in or zoom out the first area image according to the scaling ratio to obtain the second area image, wherein the resolution of the second area image is identical to the resolution of the first area image.
21. The electronic device according to claim 18, characterized in that the processor is specifically configured to:
Obtain the center point of the first area image and a first position of the center point in the current frame image;
Determine a second position of the center point in the second area image;
Synthesize the second area image with the current frame image such that the first position coincides with the second position, to obtain the synthetic frame image.
22. The electronic device according to claim 17, characterized in that the processor is specifically configured to:
Determine a first position in the current frame image;
Move the first area image from a second position, where the first area image is located, to the first position.
23. The electronic device according to claim 15, characterized in that the processor is further configured to:
During video recording of the image acquisition region, determine the target object from other frame images;
Record the processing rule applied to the first area image in the current frame image, and process, according to the processing rule, the area image where the target object is located in the other frame images.
24. The electronic device according to claim 23, characterized in that the processor is specifically configured to:
Record the scaling ratio used when the first area image is scaled;
Scale, according to the scaling ratio, the area image where the target object is located.
25. The electronic device according to claim 23, characterized in that the processor is specifically configured to:
Record the position of the second area image in the synthetic frame image, or the relative position relationship between the second area image and at least one other object in the synthetic frame image;
Move, according to the position or the relative position relationship, the area image where the target object is located.
26. The electronic device according to claim 23, characterized in that the processor is specifically configured to:
Record the editing effect used when the first area image is edited in the current frame image;
Edit, according to the editing effect, the area image where the target object is located.
27. The electronic device according to claim 15, characterized in that the processor is specifically configured to:
Perform intelligent object recognition on the current frame image, and determine a first object from the current frame image.
28. The electronic device according to claim 15, characterized in that the processor is specifically configured to:
Obtain a first feature value of a first object in the current frame image and a preset feature value;
Judge whether the first feature value matches the preset feature value;
If the first feature value matches the preset feature value, determine that the first object is the target object.
29. An electronic device, comprising:
A target object determining unit, configured to, during image acquisition of an image acquisition region, determine a target object from a current frame image;
A first area image determining unit, configured to determine a first area image of the target object in the current frame image;
A first area image processing unit, configured to process the first area image to obtain a second area image;
A synthesizing unit, configured to synthesize the second area image with the current frame image to obtain a synthetic frame image.
CN201510583839.7A 2015-09-14 2015-09-14 A kind of image processing method and electronic equipment Pending CN105227867A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510583839.7A CN105227867A (en) 2015-09-14 2015-09-14 A kind of image processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510583839.7A CN105227867A (en) 2015-09-14 2015-09-14 A kind of image processing method and electronic equipment

Publications (1)

Publication Number Publication Date
CN105227867A true CN105227867A (en) 2016-01-06

Family

ID=54996516

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510583839.7A Pending CN105227867A (en) 2015-09-14 2015-09-14 A kind of image processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN105227867A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101472158A (en) * 2007-12-27 2009-07-01 上海银晨智能识别科技有限公司 Network photographic device based on human face detection and image forming method
CN101510972A (en) * 2009-03-04 2009-08-19 南京大学 Local real time stepless zooming device for video image
CN102714761A (en) * 2009-12-29 2012-10-03 夏普株式会社 Image processing device, image processing method, and image processing program
CN102164234A (en) * 2010-02-09 2011-08-24 株式会社泛泰 Apparatus having photograph function

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105844256A (en) * 2016-04-07 2016-08-10 广州盈可视电子科技有限公司 Panorama video frame image processing method and device
CN106210510A (en) * 2016-06-28 2016-12-07 广东欧珀移动通信有限公司 A kind of photographic method based on Image Adjusting, device and terminal
CN106210510B (en) * 2016-06-28 2019-04-30 Oppo广东移动通信有限公司 A kind of photographic method based on Image Adjusting, device and terminal
CN108184066A (en) * 2018-01-10 2018-06-19 上海展扬通信技术有限公司 Shooting processing method, capture apparatus and the readable storage medium storing program for executing of photo
WO2020108082A1 (en) * 2018-11-27 2020-06-04 Oppo广东移动通信有限公司 Video processing method and device, electronic equipment and computer readable medium
CN109587401B (en) * 2019-01-02 2021-10-08 广州市奥威亚电子科技有限公司 Electronic cloud deck multi-scene shooting implementation method and system
CN109587401A (en) * 2019-01-02 2019-04-05 广州市奥威亚电子科技有限公司 The more scene capture realization method and systems of electronic platform
CN109951634A (en) * 2019-03-14 2019-06-28 Oppo广东移动通信有限公司 Image composition method, device, terminal and storage medium
CN109951634B (en) * 2019-03-14 2021-09-03 Oppo广东移动通信有限公司 Image synthesis method, device, terminal and storage medium
CN111386549A (en) * 2019-04-04 2020-07-07 合刃科技(深圳)有限公司 Method and system for reconstructing mixed type hyperspectral image
CN111386549B (en) * 2019-04-04 2023-10-13 合刃科技(深圳)有限公司 Method and system for reconstructing hybrid hyperspectral image
CN111669502A (en) * 2020-06-19 2020-09-15 北京字节跳动网络技术有限公司 Target object display method and device and electronic equipment
WO2021254502A1 (en) * 2020-06-19 2021-12-23 北京字节跳动网络技术有限公司 Target object display method and apparatus and electronic device
CN111669502B (en) * 2020-06-19 2022-06-24 北京字节跳动网络技术有限公司 Target object display method and device and electronic equipment
CN113255564B (en) * 2021-06-11 2022-05-06 上海交通大学 Real-time video identification accelerator based on key object splicing
CN113255564A (en) * 2021-06-11 2021-08-13 上海交通大学 Real-time video recognition accelerator architecture based on key object splicing

Similar Documents

Publication Publication Date Title
CN105227867A (en) A kind of image processing method and electronic equipment
JP5384190B2 (en) Method and apparatus for performing touch adjustment in an imaging device
EP3457683B1 (en) Dynamic generation of image of a scene based on removal of undesired object present in the scene
US10142522B2 (en) User feedback for real-time checking and improving quality of scanned image
CN105120172A (en) Photographing method for front and rear cameras of mobile terminal and mobile terminal
CN113508416B (en) Image fusion processing module
AU2017254859A1 (en) Method, system and apparatus for stabilising frames of a captured video sequence
US11956527B2 (en) Multi-camera post-capture image processing
US10317777B2 (en) Automatic zooming method and apparatus
CN110622207B (en) System and method for cross-fading image data
WO2016009421A1 (en) Automatic image composition
US11334961B2 (en) Multi-scale warping circuit for image fusion architecture
CN103019537A (en) Image preview method and image preview device
CN112954193B (en) Shooting method, shooting device, electronic equipment and medium
WO2018032702A1 (en) Image processing method and apparatus
CN111669492A (en) Method for processing shot digital image by terminal and terminal
JP2020091745A (en) Imaging support device and imaging support method
CN111201773A (en) Photographing method and device, mobile terminal and computer readable storage medium
US11798146B2 (en) Image fusion architecture
CN114500844A (en) Shooting method and device and electronic equipment
CN107395970A (en) A kind of photographic method and camera arrangement for intelligent terminal
CN112396669A (en) Picture processing method and device and electronic equipment
KR101511868B1 (en) Multimedia shot method and system by using multi camera device
CN103050110A (en) Method, device and system for image adjustment
WO2023042604A1 (en) Dimension measurement device, dimension measurement method, and program

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160106