CN106210510A - Photographing method, device and terminal based on image adjustment - Google Patents
Photographing method, device and terminal based on image adjustment
- Publication number
- CN106210510A (application number CN201610507430.1A)
- Authority
- CN
- China
- Prior art keywords
- image area
- target
- image
- photographing
- frame picture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
Abstract
The invention discloses a photographing method, device and terminal based on image adjustment. A target image area is acquired from a target frame picture, the target image area being the image area corresponding to a target object in the target frame picture; the target image area is adjusted according to a user instruction; and if the actual position of the target object matches the adjustment result, a photograph is taken. The embodiment of the invention can thus adjust the target image area in the target frame picture according to a user instruction and photograph when the actual position of the target object matches the adjustment result. By adjusting the target image area, the target object is adjusted virtually within the target frame picture, so the physical target object does not need to be moved; this saves time and labor, improves the user experience and reduces the adjustment cost. Meanwhile, because the adjustment is virtual, the photographer can quickly preview the adjustment effect, improving photographing efficiency.
Description
Technical Field
The embodiment of the invention relates to a photographing technology, in particular to a photographing method, a photographing device and a photographing terminal based on image adjustment.
Background
As the photographing functions of smart terminals such as smartphones have matured, users photograph with them more and more frequently.
The smart terminal displays the frame picture acquired by the camera on its screen; when the user taps the photographing button, the currently acquired frame picture is stored as a photo.
When photographing a movable object, the object often needs to be repositioned to achieve a better picture layout. However, to see the layout effect after such a move, the user has to physically move the object, which is time-consuming, laborious and inefficient.
Disclosure of Invention
The invention provides a photographing method, a photographing device and a photographing terminal based on image adjustment, so that the layout effect of an object after movement can be previewed without moving the physical object, saving time and labor and improving photographing efficiency.
In a first aspect, an embodiment of the present invention provides a photographing method based on image adjustment, including:
acquiring a target image area from a target frame picture, wherein the target image area is an image area corresponding to a target object in the target frame picture;
adjusting the target image area according to a user instruction;
and if the actual position of the target object is matched with the adjustment result, photographing is carried out.
In a second aspect, an embodiment of the present invention further provides a photographing apparatus based on image adjustment, including:
the image area acquisition unit is used for acquiring a target image area from a target frame picture, wherein the target image area is an image area corresponding to a target object in the target frame picture;
the adjusting unit is used for adjusting the target image area acquired by the image area acquiring unit according to a user instruction;
and the photographing unit is used for photographing if the actual position of the target object is matched with the adjusting result of the adjusting unit.
In a third aspect, an embodiment of the present invention further provides a terminal, including the photographing apparatus based on image adjustment shown in the second aspect.
The embodiment of the invention can adjust the target image area in the target frame picture according to the user instruction, and take a picture when the actual position of the target object is matched with the adjustment result. In the prior art, the entity of the target object needs to be moved to determine the photographing effect after the movement, and the photographing efficiency is low. The embodiment of the invention can realize the virtual adjustment of the target object in the target frame picture by adjusting the target image area, further does not need to move the target object entity, saves time and labor, and achieves the effects of improving the user experience and saving the adjustment cost. Meanwhile, the target object is virtually adjusted in the target frame picture, so that a photographer can quickly obtain an adjustment effect, and the photographing efficiency is improved.
Drawings
Fig. 1 is a flowchart of a photographing method based on image adjustment according to a first embodiment of the present invention;
FIG. 2 is a flowchart of a photographing method based on image adjustment according to a second embodiment of the present invention;
FIG. 3 is a schematic diagram of human body image region translation according to a second embodiment of the present invention;
FIG. 4 is a schematic diagram of a human body model adjustment according to a second embodiment of the present invention;
fig. 5 is a flowchart of a photographing method based on image adjustment according to a third embodiment of the present invention;
FIG. 6 is a schematic diagram illustrating virtual image adjustment of a static mold according to a third embodiment of the present invention;
fig. 7 is a flowchart of a photographing method based on image adjustment according to a fourth embodiment of the present invention;
fig. 8 is a flowchart of a photographing method based on image adjustment according to a fifth embodiment of the present invention;
fig. 9 is a flowchart of a photographing method based on image adjustment according to a sixth embodiment of the present invention;
fig. 10 is a schematic structural diagram of a photographing apparatus based on image adjustment according to a seventh embodiment of the present invention;
fig. 11 is a schematic structural diagram of a mobile terminal according to an eighth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of a photographing method based on image adjustment according to an embodiment of the present invention, where the present embodiment is applicable to a case of photographing using an intelligent terminal, and the method may be executed by an intelligent terminal such as a smart phone or a tablet computer, and specifically includes the following steps:
step 110, acquiring a target image area from the target frame picture.
And the target image area is an image area corresponding to a target object in the target frame picture.
The frame picture acquired by the camera is displayed on the screen in real time. The target frame picture may be the current frame picture displayed on the screen. Since the frame picture acquired by the camera may change at any time, the target frame picture may also be the frame picture selected by the photographer at a certain moment, i.e., the frame picture corresponding to that moment. For example, if at 12:00 the user adjusts a target image area of the frame picture displayed on the screen, the frame picture displayed on the screen at 12:00 is determined to be the target frame picture.
Firstly, the contour lines of all objects are searched from a target frame picture through image analysis, and a closed area formed by the contour lines forms a first image area. When image analysis is carried out, the boundaries of different color blocks can be identified through a boundary tracking algorithm, and then the contour line of the object is obtained. Further, after the first image area is determined, the obtained first image area is screened according to the category of the target object. Categories of target objects such as people, animals, furniture, cars, airplanes, etc. Next, the user selects one or more first image regions among the obtained first image regions as target image regions.
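The patent does not specify the extraction algorithm beyond mentioning boundary tracking. As an illustrative sketch only, the closed first image areas could be collected with a simple connected-component pass over a color grid; all names here are hypothetical, and a real implementation would trace contour lines of color-block boundaries as described above:

```python
from collections import deque

def first_image_regions(pixels, background):
    """Group same-colored, non-background pixels into connected regions;
    each region approximates one object's first image area."""
    h, w = len(pixels), len(pixels[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for y in range(h):
        for x in range(w):
            if seen[y][x] or pixels[y][x] == background:
                continue
            color, queue, region = pixels[y][x], deque([(y, x)]), []
            seen[y][x] = True
            while queue:
                cy, cx = queue.popleft()
                region.append((cy, cx))
                # 4-connected neighbors of the same color belong to the region
                for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                    if (0 <= ny < h and 0 <= nx < w and not seen[ny][nx]
                            and pixels[ny][nx] == color):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            regions.append(region)
    return regions
```

The returned regions would then be screened by object category and offered to the user for selection, as the text describes.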
And step 120, adjusting the target image area according to the user instruction.
The adjustment comprises moving and rotating. If the target image area is a human body image area, the adjustment further comprises the adjustment of the human body posture. When the number of the target image areas is multiple, the multiple target image areas can be adjusted according to the same displacement vector input by the user, and the user can adjust the single target image area by independently dragging.
Illustratively, after the user selects N (N is a positive integer greater than 1) target image areas, the N target image areas are adjusted uniformly by default. For example, the user presses a finger on one target image area A displayed on the touch screen and drags it from its current position a-1 to another position a-2. A displacement vector is determined from the coordinates of a-1 and a-2, and the remaining N-1 of the N target image areas are moved by the same vector. The user can cancel unified adjustment by tapping a button such as 'release group', after which each target image area can be adjusted individually.
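The uniform-move behavior described above can be sketched as follows; this is a minimal illustration with hypothetical names, not the patent's implementation:

```python
def drag_displacement(start, end):
    """Displacement vector from drag start point a-1 to drop point a-2."""
    return (end[0] - start[0], end[1] - start[1])

def move_regions(regions, vector):
    """Apply the same displacement vector to every selected region
    (each region is a list of (x, y) points)."""
    dx, dy = vector
    return [[(x + dx, y + dy) for x, y in region] for region in regions]
```

For example, dragging one area from (10, 20) to (30, 25) yields the vector (20, 5), which is then applied to the remaining N-1 areas.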
And step 130, if the actual position of the target object is matched with the adjustment result, photographing is carried out.
If there are multiple target image areas, photographing is performed when corresponding target objects are present in all the adjusted target image areas.
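The patent leaves the matching test unspecified. One plausible sketch is to compare the bounding boxes of the adjusted areas with the detected object positions using an intersection-over-union threshold; the function names and the 0.8 threshold are assumptions:

```python
def boxes_overlap_ratio(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def all_targets_matched(adjusted, detected, threshold=0.8):
    """Photograph only when every adjusted target area is occupied by a
    detected object with sufficient overlap."""
    return all(
        any(boxes_overlap_ratio(adj, det) >= threshold for det in detected)
        for adj in adjusted
    )
```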
The embodiment can adjust the target image area in the target frame picture according to the user instruction, and take a picture when the actual position of the target object is matched with the adjustment result. In the prior art, the entity of the target object needs to be moved to determine the photographing effect after the movement, and the photographing efficiency is low. The embodiment can realize the virtual target object adjustment in the target frame picture by adjusting the target image area, and then does not need to move the target object entity, thereby saving time and labor, and achieving the effects of improving the user experience and saving the adjustment cost. Meanwhile, the target object is virtually adjusted in the target frame picture, so that a photographer can quickly obtain an adjustment effect, and the photographing efficiency is improved.
Example two
Fig. 2 is a flowchart of a photographing method based on image adjustment according to a second embodiment of the present invention, and as a further description of the first embodiment, step 110, obtaining a target image area from a target frame picture, may be implemented in the following manner:
and 110a, acquiring a human body image area from the target frame picture.
The human body image area is the image area corresponding to a human body contour in the target frame picture. Because the human body contour has distinctive characteristics, a feature vector of the human body contour can be obtained through machine learning or similar means, and the human body image area can be found among the image areas of the various objects according to that feature vector.
Illustratively, as shown on the left side of fig. 3, step 110a obtains three human body image areas, corresponding to one taller person and two shorter persons; from left to right they are the two shorter persons followed by the taller person. The user finds that a photo taken with these standing positions would not look good, so the user moves the three human body image areas by dragging or similar means to obtain standing positions the user considers better, as shown on the right side of fig. 3: the taller person is in the middle and the two shorter persons stand on either side.
In actual use, a smartphone or tablet computer is typically used to take a picture of a person. When people take photos, especially when people take photos of multiple people, the positions and the figures of the multiple people need to be reasonably arranged so as to achieve a better photographing effect. According to the embodiment, the person image area in the target frame picture is obtained, so that a photographer can adjust the person image area in the target frame picture, the position of the person is virtually adjusted, and the photographing efficiency of the person is improved.
Since different photographing postures (also called poses) produce very different photographing effects, when photographing a person there is a need not only to move the person's virtual image (i.e., the human body image area) but also to design the person's virtual photographing posture. Merely moving the human body image area therefore does not satisfy this need. Accordingly, step 120, adjusting the target image area according to the user instruction, further includes:
and step 120a, adjusting the body state corresponding to the human body image area according to the body state adjusting instruction input by the user.
Firstly, a human body model matched to the area of the human body image region is configured for that region; the human body model has adjustable nodes. Then the configured human body model is fitted to the human body image region by changing the positions of the adjustable nodes. Finally, displacement instructions input by the user for the adjustable nodes are received. The adjustable nodes include a neck node, left and right elbow joint nodes, left and right wrist joint nodes, left and right knee joint nodes, and left and right hip joint nodes. Further, the adjustable nodes may also include finger knuckle nodes.
Further, if the human body image region does not include all regions of the human body model, for example it contains only the upper half of the body, a correspondingly partial human body model (such as an upper-body-only model) is configured for the human body image region.
For example, the left side of fig. 4 shows a schematic diagram of a human body model before adjustment; the user can drag an adjustable node to perform the adjustment. In fig. 4, circles represent adjustable nodes. The right side of fig. 4 shows the adjusted model. Furthermore, the user can first drag the human body image area to set the virtual position of the photographed person, and then drag the adjustable nodes on that person's virtual human body model to design the person's posture.
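A minimal sketch of a node-adjustable human body model follows, using hypothetical class and node names and only the node list given above; fitting the model to the image region is omitted:

```python
ADJUSTABLE_NODES = (
    "neck", "left_elbow", "right_elbow", "left_wrist", "right_wrist",
    "left_knee", "right_knee", "left_hip", "right_hip",
)

class BodyModel:
    """Human body model whose posture is changed by moving adjustable nodes."""

    def __init__(self, nodes):
        unknown = set(nodes) - set(ADJUSTABLE_NODES)
        if unknown:
            raise ValueError(f"not adjustable: {unknown}")
        self.nodes = dict(nodes)  # name -> (x, y) position

    def move_node(self, name, dx, dy):
        """Apply a user displacement instruction to one node."""
        x, y = self.nodes[name]
        self.nodes[name] = (x + dx, y + dy)
```

Dragging a circle in fig. 4 would translate to one `move_node` call per drag event.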
This embodiment allows the virtual position and virtual posture of the photographed person to be designed (i.e., adjusted) before the photo is taken, and the photo is then taken according to the designed virtual position and posture. This avoids repeatedly directing the photographed person on the spot, greatly improves the efficiency of photographic layout design, and improves the user experience.
EXAMPLE III
Fig. 5 is a flowchart of a photographing method based on image adjustment according to a third embodiment of the present invention, and as a further description of the above embodiment, step 120, adjusting the target image area according to a user instruction, may be implemented in the following manner:
and step 120b, moving the position of the target image area according to the displacement instruction input by the user. Or,
and step 120c, rotating the target image area according to the rotation instruction input by the user.
When shooting still objects, for example several objects placed on a desktop, after the smartphone or tablet acquires the image area of each object, the user can drag or rotate the target image area to achieve a better composition. Rotation may be performed around a preset coordinate point as the rotation center. The preset coordinate point may be any coordinate point in the target image area, preferably its geometric center.
Illustratively, as shown on the left side of fig. 6, four molds lie scattered on a tabletop: two identical isosceles right triangle molds, a square mold and a semicircular mold, where the side of the square equals the legs of the isosceles right triangles and the radius of the semicircle. Photographed as-is, the composition is scattered and poor. The user can rotate and drag the virtual image areas of the molds to obtain the arrangement shown on the right side of fig. 6, then physically rearrange the four molds accordingly to achieve a better photographing effect.
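The rotation of step 120c is standard 2-D rotation about a center point; a sketch with hypothetical names, defaulting to the geometric centroid as the text prefers:

```python
import math

def rotate_region(points, angle_deg, center=None):
    """Rotate a region's (x, y) points about a center point; by default
    the geometric center (centroid) of the region is used."""
    if center is None:
        cx = sum(x for x, _ in points) / len(points)
        cy = sum(y for _, y in points) / len(points)
    else:
        cx, cy = center
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    # Translate to the center, rotate, translate back
    return [
        (cx + (x - cx) * cos_a - (y - cy) * sin_a,
         cy + (x - cx) * sin_a + (y - cy) * cos_a)
        for x, y in points
    ]
```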
The embodiment can adjust the target image area in a moving or rotating mode, is suitable for still shooting, especially in shooting of top views or bottom views, provides richer adjusting scenes for users, and improves user experience.
Example four
Fig. 7 is a flowchart of a photographing method based on image adjustment according to a fourth embodiment of the present invention, and as a further description of the foregoing embodiment, the step 110 of acquiring a target image area from a target frame picture includes:
and step 111, acquiring at least one first image area from the target frame picture, wherein the first image area is an image area corresponding to any object in the target frame picture.
Step 112, highlighting the at least one first image area.
The boundary of the first image area may be highlighted with, for example, a blinking dotted line or an emphasized boundary line.
And step 113, determining a target image area from the at least one first image area according to a selection instruction input by a user.
Optionally, when the user taps a first image area, selection is triggered and the tapped first image area is determined as the target image area.
Optionally, if the first image regions include both human body image regions and non-human-body image regions (also referred to as still image regions), first image regions of the same type as the target image region first selected by the user are recommended to the user. For example, if the user selects a human body image region, the other human body image regions are recommended. The user may ignore the recommendation and select a still image area, or may select the target area from the recommended human body image regions.
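The same-type recommendation could be a simple filter; a sketch with hypothetical region records (the `type` field distinguishing human from still regions is an assumption):

```python
def recommend_regions(first_regions, chosen):
    """After the user picks a first target region, recommend the remaining
    first image regions of the same type (human vs. still)."""
    return [
        r for r in first_regions
        if r is not chosen and r["type"] == chosen["type"]
    ]
```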
According to the embodiment, the user can select the target image area in the first image area, the user can actively select different types of image areas as the target image area, the usability is improved, and the user experience is further improved.
EXAMPLE five
Fig. 8 is a flowchart of a photographing method based on image adjustment according to a fifth embodiment of the present invention, and as a further description of the foregoing embodiment, step 130, if the actual position of the target object matches the adjustment result, photographing is performed, which may be implemented in the following manner:
and step 131, if the actual position of the target object is matched with the adjustment result, outputting matching prompt information.
Optionally, the matching prompt information is output via the contour line of the target image area. For example, on a match, the contour line of the target image area changes from a dotted line to a bold green solid line.
Optionally, the matching prompt information is output in a text mode. For example, when there is a match, a prompt box is displayed, and "design style has been matched, please confirm the shot" or the like is displayed in the prompt box.
Optionally, the matching prompt information is output through a vibration function of the smart phone or the tablet computer.
Step 132, if a photographing instruction input by the user is received, photographing is performed.
And after the user acquires the matching prompt information, triggering a photographing instruction. Optionally, the photographing instruction is triggered by clicking a photographing button or by a gesture.
This embodiment can output prompt information when a match occurs, so that the user can see the matching result intuitively and quickly. Photographing according to a user-input photographing instruction keeps the user in control and avoids the mistaken shots that would result from the device photographing automatically as soon as a match is found, improving the effectiveness of the photographing action and the user experience.
EXAMPLE six
Fig. 9 is a flowchart of a photographing method based on image adjustment according to a sixth embodiment of the present invention, which further illustrates the above embodiment, before photographing, further includes:
step 140, adding a virtual image of at least one preset object to the target frame picture.
The preset object may be an object other than the target frame picture, and the virtual image may be a three-dimensional image or a two-dimensional image.
Optionally, the type of the preset object is determined according to the type of the target object corresponding to the target image area. For example, if the target object is a person, the type of the preset object is also a person.
Optionally, the type of the preset object is different from the target object. For example, the target object is a person, and the preset object is a static object, such as a prop.
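Adding a 2-D virtual image of a preset object to the target frame picture amounts to compositing a sprite over the frame. A minimal grayscale sketch follows; all names are hypothetical and the patent does not specify the blending method:

```python
def overlay_virtual_image(frame, sprite, top_left, alpha=1.0):
    """Blend a preset object's 2-D virtual image (sprite) into the target
    frame at top_left. None sprite pixels are transparent; pixels are
    grayscale ints for simplicity. Returns a new frame."""
    out = [row[:] for row in frame]
    ox, oy = top_left
    for sy, row in enumerate(sprite):
        for sx, pixel in enumerate(row):
            y, x = oy + sy, ox + sx
            # Skip transparent pixels and anything outside the frame
            if pixel is None or not (0 <= y < len(out) and 0 <= x < len(out[0])):
                continue
            out[y][x] = round(alpha * pixel + (1 - alpha) * out[y][x])
    return out
```

A 3-D preset object would first be projected to a 2-D sprite before the same compositing step.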
According to the embodiment, the virtual image of the preset object outside the target frame picture can be added into the target frame picture, the editability of the target frame picture is improved, and the user experience is improved.
EXAMPLE seven
Fig. 10 is a schematic structural diagram of a photographing device based on image adjustment according to a seventh embodiment of the present invention, where the photographing device based on image adjustment is located in a terminal, and the terminal is an intelligent terminal such as a smart phone or a tablet computer, and the photographing device includes:
an image area acquiring unit 11, configured to acquire a target image area from a target frame picture, where the target image area is an image area corresponding to a target object in the target frame picture;
an adjusting unit 12, configured to adjust the target image area acquired by the image area acquiring unit 11 according to a user instruction;
and a photographing unit 13, configured to perform photographing if the actual position of the target object matches the adjustment result of the adjusting unit 12.
Further, the image area obtaining unit 11 is specifically configured to obtain a human body image area from a target frame picture, where the human body image area is an image area corresponding to a human body contour in the target frame picture.
Further, the adjusting unit 12 is specifically configured to adjust the body state corresponding to the human body image area according to a body state adjusting instruction input by a user.
Further, the adjusting unit 12 is specifically configured to:
moving the position of the target image area according to a displacement instruction input by a user; or,
and rotating the target image area according to a rotation instruction input by a user.
Further, the image area obtaining unit 11 is specifically configured to:
acquiring at least one first image area from a target frame picture, wherein the first image area is an image area corresponding to any object in the target frame picture;
highlighting the at least one first image region;
and determining a target image area from the at least one first image area according to a selection instruction input by a user.
Further, the photographing unit 13 is specifically configured to:
outputting matching prompt information if the actual position of the target object matches the adjustment result;
and if a photographing instruction input by the user is received, photographing.
Further, an adding unit 14 is also included,
the adding unit 14 is configured to add a virtual image of at least one preset object to the target frame picture.
The device can execute the methods provided by the first embodiment to the sixth embodiment of the invention, and has corresponding functional modules and beneficial effects for executing the methods. For details of the technology that are not described in detail in this embodiment, reference may be made to the methods provided in the first to sixth embodiments of the present invention.
Further, an embodiment of the present invention also provides a storage medium containing computer-executable instructions, which when executed by a computer processor, are used for executing a photographing method based on image adjustment, the method including:
acquiring a target image area from a target frame picture, wherein the target image area is an image area corresponding to a target object in the target frame picture;
adjusting the target image area according to a user instruction;
and if the actual position of the target object is matched with the adjustment result, photographing is carried out.
Optionally, the computer-executable instructions, when executed by the computer processor, may be further configured to implement a technical solution of a photographing method based on image adjustment provided in any embodiment of the present invention.
From the above description of the embodiments, it is obvious for those skilled in the art that the present invention can be implemented by software and necessary general hardware, and certainly, can also be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
Example eight
Fig. 11 is a schematic structural diagram of a terminal according to an eighth embodiment of the present invention, where the terminal includes a photographing device based on image adjustment according to a seventh embodiment. In one implementation, the terminal is a mobile terminal, such as a smartphone or a tablet computer. The mobile terminal may include components such as a communication unit 21, a memory 22 including at least one computer-readable storage medium, an input unit 23, a display unit 24, a sensor 25, an audio circuit 26, a WIFI (Wireless Fidelity) module 27, a processor 28 including at least one processing core, and a power supply 29. Those skilled in the art will appreciate that the mobile terminal architecture shown in the figures is not intended to be limiting of mobile terminals and may include more or fewer components than those shown, or some of the components may be combined, or a different arrangement of components. Specifically, the method comprises the following steps:
The communication unit 21 may be used for receiving and transmitting information, or for receiving and transmitting signals during a call; the communication unit 21 may be an RF (Radio Frequency) circuit, a router, a modem, or another network communication device. In particular, when the communication unit 21 is an RF circuit, it receives downlink information from a base station and passes it to one or more processors 28 for processing, and transmits uplink data to the base station. Generally, the RF circuit serving as the communication unit includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, an LNA (Low Noise Amplifier), a duplexer, and the like. Further, the communication unit 21 may also communicate with networks and other devices by wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), e-mail, SMS (Short Messaging Service), etc. The memory 22 may be used to store software programs and modules, and the processor 28 executes various functional applications and performs data processing by running the software programs and modules stored in the memory 22. The memory 22 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the mobile terminal (such as audio data or a phonebook), and the like.
Further, the memory 22 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. Accordingly, the memory 22 may also include a memory controller to provide the processor 28 and the input unit 23 with access to the memory 22.
The input unit 23 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. Preferably, the input unit 23 may include a touch-sensitive surface 231 and other input devices 232. The touch-sensitive surface 231, also referred to as a touch display screen or a touch pad, may collect touch operations by a user on or near it (e.g., operations performed on or near the touch-sensitive surface 231 using a finger, a stylus, or any other suitable object or attachment) and drive the corresponding connection device according to a predetermined program. Optionally, the touch-sensitive surface 231 may comprise two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch-point coordinates, sends the coordinates to the processor 28, and can receive and execute commands from the processor 28. In addition, the touch-sensitive surface 231 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch-sensitive surface 231, the input unit 23 may include other input devices 232, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys or a power key), a trackball, a mouse, a joystick, and the like.
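For illustration only, the touch pipeline just described (detection device reports a raw signal, the controller converts it to touch-point coordinates, the processor determines the event type) might be sketched as follows. All function names, the linear scaling, and the tap threshold are assumptions for this sketch, not details disclosed by the patent:

```python
from typing import Tuple

def touch_controller(raw_signal: Tuple[int, int], scale: float = 0.5) -> Tuple[float, float]:
    """Convert a raw sensor reading into touch-point coordinates.

    A simple linear mapping is assumed; a real controller would apply
    panel-specific calibration before handing coordinates to the processor.
    """
    rx, ry = raw_signal
    return (rx * scale, ry * scale)

def classify_event(start: Tuple[float, float], end: Tuple[float, float],
                   tap_threshold: float = 8.0) -> str:
    """Processor-side decision on the touch-event type:
    a short travel counts as a tap, a longer one as a swipe."""
    dist = ((end[0] - start[0]) ** 2 + (end[1] - start[1]) ** 2) ** 0.5
    return "tap" if dist <= tap_threshold else "swipe"

# The controller converts two raw readings; the processor classifies the gesture.
start = touch_controller((100, 100))
end = touch_controller((160, 100))
print(classify_event(start, end))  # swipe: the touch travelled 30 units
```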
The display unit 24 may be used to display information input by or provided to the user and various graphical user interfaces of the mobile terminal, which may be made up of graphics, text, icons, video, and any combination thereof. The Display unit 24 may include a Display panel 241, and optionally, the Display panel 241 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like. Further, the touch-sensitive surface 231 may overlay the display panel 241, and when the touch-sensitive surface 231 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 28 to determine the type of the touch event, and then the processor 28 provides a corresponding visual output on the display panel 241 according to the type of the touch event. Although in FIG. 11 the touch-sensitive surface 231 and the display panel 241 are implemented as two separate components for input and output functions, in some embodiments the touch-sensitive surface 231 may be integrated with the display panel 241 for input and output functions.
The mobile terminal may also include at least one sensor 25, such as a light sensor, a motion sensor, and other sensors. The light sensor may include an ambient light sensor, which may adjust the brightness of the display panel 241 according to the brightness of ambient light, and a proximity sensor, which may turn off the display panel 241 and/or the backlight when the mobile terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when the terminal is stationary, and can be used in applications that recognize the posture of the terminal (such as landscape/portrait switching, related games, and magnetometer posture calibration), in vibration-recognition functions (such as a pedometer or tap detection), and the like; the mobile terminal may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described herein again.
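As a toy illustration of the two sensor policies above (proximity blanking during a call, ambient-light-driven brightness), one could imagine logic like the following. The function name, clamping range, and lux-to-brightness mapping are assumptions for this sketch, not part of the patent:

```python
def adjust_backlight(ambient_lux: float, near_ear: bool) -> int:
    """Hypothetical display policy: the proximity sensor blanks the panel when
    the terminal is held to the ear; otherwise brightness follows the ambient
    light reading, clamped to an assumed 10-255 backlight range."""
    if near_ear:
        return 0  # proximity sensor fired: turn off panel and backlight
    # ambient light sensor: brighter surroundings -> brighter panel
    return max(10, min(255, int(ambient_lux)))

print(adjust_backlight(500.0, near_ear=False))  # 255 (bright daylight, clamped)
print(adjust_backlight(500.0, near_ear=True))   # 0 (panel off during a call)
```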
The audio circuit 26, a speaker 261, and a microphone 262 may provide an audio interface between the user and the mobile terminal. The audio circuit 26 may transmit an electrical signal, converted from received audio data, to the speaker 261, which converts the electrical signal into a sound signal and outputs it; conversely, the microphone 262 converts a collected sound signal into an electrical signal, which the audio circuit 26 receives and converts into audio data; the audio data is output to the processor 28 for processing and then transmitted via the RF circuit 21 to, for example, another mobile terminal, or output to the memory 22 for further processing. The audio circuit 26 may also include an earbud jack to provide communication between a peripheral headset and the mobile terminal.
To enable wireless communication, a wireless communication unit 27 may be configured on the mobile terminal; the wireless communication unit 27 may be a WIFI module. WIFI is a short-range wireless transmission technology, and through the wireless communication unit 27 the mobile terminal can help the user send and receive e-mail, browse web pages, access streaming media, and so on, providing the user with wireless broadband Internet access. Although the wireless communication unit 27 is shown in the figure, it is not an essential part of the mobile terminal and may be omitted as needed without changing the essence of the disclosure.
The processor 28 may connect the various parts of the entire handset using various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing the software programs and/or modules stored in the memory 22 and invoking the data stored in the memory 22, thereby monitoring the handset as a whole. Optionally, the processor 28 may include one or more processing cores; preferably, the processor 28 may integrate an application processor, which mainly handles the operating system, user interfaces, applications, and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 28.
The mobile terminal also includes a power supply 29 (e.g., a battery) for powering the various components; preferably, the power supply 29 is logically connected to the processor 28 via a power management system, which manages charging, discharging, and power consumption. The power supply 29 may also include one or more DC or AC power sources, a recharging system, a power-failure detection circuit, a power converter or inverter, a power status indicator, and any other such components.
It should be noted that the mobile terminal may further include a camera, a Bluetooth module, and the like, which are not described herein again.
In this embodiment, the processor 28 is configured to:
acquiring a target image area from a target frame picture, wherein the target image area is an image area corresponding to a target object in the target frame picture;
adjusting the target image area according to a user instruction;
and photographing if the actual position of the target object matches the adjustment result.
Further, acquiring the target image area from the target frame picture includes:
acquiring a human body image area from the target frame picture, wherein the human body image area is an image area corresponding to the human body contour in the target frame picture.
Further, adjusting the target image area according to the user instruction includes:
adjusting the posture corresponding to the human body image area according to a posture adjustment instruction input by the user.
Further, adjusting the target image area according to the user instruction includes:
moving the position of the target image area according to a displacement instruction input by a user; or,
rotating the target image area according to a rotation instruction input by a user.
Further, acquiring the target image area from the target frame picture includes:
acquiring at least one first image area from a target frame picture, wherein the first image area is an image area corresponding to any object in the target frame picture;
highlighting the at least one first image region;
and determining a target image area from the at least one first image area according to a selection instruction input by a user.
Further, the photographing if the actual position of the target object matches the adjustment result includes:
matching the actual position of the target object with the adjustment result, and outputting matching prompt information;
and photographing if a photographing instruction input by the user is received.
Further, before the photographing, the method also includes:
adding a virtual image of at least one preset object into the target frame picture.
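For illustration only, the processor logic enumerated above (acquire a target image area, adjust it per user instructions such as displacement or rotation, then photograph once the subject's actual position matches the adjustment) might be sketched as follows. Every name, the tolerance values, and the matching rule are hypothetical assumptions for this sketch, not details disclosed by the patent:

```python
from dataclasses import dataclass

@dataclass
class Region:
    """Image area for a detected target object (hypothetical helper)."""
    x: float
    y: float
    angle: float = 0.0  # orientation in degrees

def adjust(region: Region, dx: float = 0.0, dy: float = 0.0,
           rotation: float = 0.0) -> Region:
    """Apply a user displacement and/or rotation instruction to the region."""
    return Region(region.x + dx, region.y + dy, region.angle + rotation)

def matches(actual: Region, target: Region,
            pos_tol: float = 10.0, angle_tol: float = 5.0) -> bool:
    """Assumed matching rule: positions within pos_tol pixels on each axis
    and orientation within angle_tol degrees."""
    return (abs(actual.x - target.x) <= pos_tol
            and abs(actual.y - target.y) <= pos_tol
            and abs(actual.angle - target.angle) <= angle_tol)

def shoot_when_matched(actual: Region, adjusted: Region, camera_shoot) -> bool:
    """Photograph only when the subject's live position matches the
    user-adjusted region; returns True if a photo was taken."""
    if matches(actual, adjusted):
        camera_shoot()
        return True
    return False

# Example: the user drags the detected region 30 px to the right, then the
# subject steps over; `camera_shoot` is a stand-in for the real shutter call.
detected = Region(x=100, y=200)
adjusted = adjust(detected, dx=30)
took = shoot_when_matched(Region(x=128, y=203), adjusted, camera_shoot=lambda: None)
print(took)  # True: the subject is within the 10-px tolerance
```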
It is to be noted that the foregoing is only a description of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, and substitutions can be made by those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in some detail through the above embodiments, it is not limited to those embodiments and may include other equivalent embodiments without departing from its spirit; the scope of the present invention is determined by the appended claims.
Claims (15)
1. A photographing method based on image adjustment is characterized by comprising the following steps:
acquiring a target image area from a target frame picture, wherein the target image area is an image area corresponding to a target object in the target frame picture;
adjusting the target image area according to a user instruction;
and photographing if the actual position of the target object matches the adjustment result.
2. The image adjustment-based photographing method according to claim 1, wherein the acquiring a target image area from a target frame picture comprises:
acquiring a human body image area from a target frame picture, wherein the human body image area is an image area corresponding to a human body contour in the target frame picture.
3. The image adjustment-based photographing method according to claim 2, wherein the adjusting the target image area according to the user instruction comprises:
adjusting the posture corresponding to the human body image area according to a posture adjustment instruction input by a user.
4. The image adjustment-based photographing method according to claim 1 or 2, wherein the adjusting the target image area according to the user instruction comprises:
moving the position of the target image area according to a displacement instruction input by a user; or,
rotating the target image area according to a rotation instruction input by a user.
5. The image adjustment-based photographing method according to claim 1, wherein the acquiring a target image area from a target frame picture comprises:
acquiring at least one first image area from a target frame picture, wherein the first image area is an image area corresponding to any object in the target frame picture;
highlighting the at least one first image region;
and determining a target image area from the at least one first image area according to a selection instruction input by a user.
6. The image adjustment-based photographing method according to claim 1, wherein the photographing performed if the actual position of the target object matches the adjustment result comprises:
matching the actual position of the target object with the adjustment result, and outputting matching prompt information;
and photographing if a photographing instruction input by the user is received.
7. The image adjustment-based photographing method according to claim 1, further comprising, before the photographing:
adding a virtual image of at least one preset object into the target frame picture.
8. A photographing device based on image adjustment is characterized by comprising:
the image area acquisition unit is used for acquiring a target image area from a target frame picture, wherein the target image area is an image area corresponding to a target object in the target frame picture;
the adjusting unit is used for adjusting the target image area acquired by the image area acquiring unit according to a user instruction;
and the photographing unit is used for photographing if the actual position of the target object matches the adjustment result of the adjusting unit.
9. The image adjustment-based photographing device according to claim 8, wherein the image region acquiring unit is specifically configured to acquire a human body image region from a target frame picture, and the human body image region is an image region corresponding to a human body contour in the target frame picture.
10. The image adjustment-based photographing device according to claim 9, wherein the adjusting unit is specifically configured to adjust the posture corresponding to the human body image area according to a posture adjustment instruction input by a user.
11. The image adjustment-based photographing device according to claim 8 or 9, wherein the adjusting unit is specifically configured to:
moving the position of the target image area according to a displacement instruction input by a user; or,
and rotating the target image area according to a rotation instruction input by a user.
12. The image adjustment-based photographing device according to claim 8, wherein the image region acquiring unit is specifically configured to:
acquiring at least one first image area from a target frame picture, wherein the first image area is an image area corresponding to any object in the target frame picture;
highlighting the at least one first image region;
and determining a target image area from the at least one first image area according to a selection instruction input by a user.
13. The image adjustment-based photographing device according to claim 8, wherein the photographing unit is specifically configured to:
matching the actual position of the target object with the adjustment result, and outputting matching prompt information;
and if a photographing instruction input by the user is received, photographing.
14. The image adjustment-based photographing device according to claim 8, further comprising an adding unit, wherein
the adding unit is used for adding a virtual image of at least one preset object into the target frame picture.
15. A terminal, characterized by comprising the image adjustment-based photographing apparatus according to any one of claims 8 to 14.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610507430.1A CN106210510B (en) | 2016-06-28 | 2016-06-28 | A kind of photographic method based on Image Adjusting, device and terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610507430.1A CN106210510B (en) | 2016-06-28 | 2016-06-28 | A kind of photographic method based on Image Adjusting, device and terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106210510A true CN106210510A (en) | 2016-12-07 |
CN106210510B CN106210510B (en) | 2019-04-30 |
Family
ID=57464033
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610507430.1A Active CN106210510B (en) | 2016-06-28 | 2016-06-28 | A kind of photographic method based on Image Adjusting, device and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106210510B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107864299A (en) * | 2017-12-25 | 2018-03-30 | 广东欧珀移动通信有限公司 | Image display method and Related product |
CN109525806A (en) * | 2017-09-20 | 2019-03-26 | 夏普株式会社 | Portable display apparatus, image supply device, display system |
WO2020093799A1 (en) * | 2018-11-06 | 2020-05-14 | 华为技术有限公司 | Image processing method and apparatus |
CN111479069A (en) * | 2020-04-23 | 2020-07-31 | 深圳创维-Rgb电子有限公司 | Camera control method, display terminal and computer storage medium |
CN113284052A (en) * | 2020-02-19 | 2021-08-20 | 阿里巴巴集团控股有限公司 | Image processing method and apparatus |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103577819A (en) * | 2012-08-02 | 2014-02-12 | 北京千橡网景科技发展有限公司 | Method and equipment for assisting and prompting photo taking postures of human bodies |
CN104717413A (en) * | 2013-12-12 | 2015-06-17 | 北京三星通信技术研究有限公司 | Shooting assistance method and equipment |
US20150373258A1 (en) * | 2014-06-24 | 2015-12-24 | Cyberlink Corp. | Systems and Methods for Automatically Capturing Digital Images Based on Adaptive Image-Capturing Templates |
CN105227867A (en) * | 2015-09-14 | 2016-01-06 | 联想(北京)有限公司 | A kind of image processing method and electronic equipment |
CN105306801A (en) * | 2014-06-09 | 2016-02-03 | 中兴通讯股份有限公司 | Shooting method and device and terminal |
US20160054903A1 (en) * | 2014-08-25 | 2016-02-25 | Samsung Electronics Co., Ltd. | Method and electronic device for image processing |
- 2016-06-28: Application CN201610507430.1A filed in CN; granted as CN106210510B; status: Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103577819A (en) * | 2012-08-02 | 2014-02-12 | 北京千橡网景科技发展有限公司 | Method and equipment for assisting and prompting photo taking postures of human bodies |
CN104717413A (en) * | 2013-12-12 | 2015-06-17 | 北京三星通信技术研究有限公司 | Shooting assistance method and equipment |
CN105306801A (en) * | 2014-06-09 | 2016-02-03 | 中兴通讯股份有限公司 | Shooting method and device and terminal |
US20150373258A1 (en) * | 2014-06-24 | 2015-12-24 | Cyberlink Corp. | Systems and Methods for Automatically Capturing Digital Images Based on Adaptive Image-Capturing Templates |
US20160054903A1 (en) * | 2014-08-25 | 2016-02-25 | Samsung Electronics Co., Ltd. | Method and electronic device for image processing |
CN105227867A (en) * | 2015-09-14 | 2016-01-06 | 联想(北京)有限公司 | A kind of image processing method and electronic equipment |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109525806A (en) * | 2017-09-20 | 2019-03-26 | 夏普株式会社 | Portable display apparatus, image supply device, display system |
CN107864299A (en) * | 2017-12-25 | 2018-03-30 | 广东欧珀移动通信有限公司 | Image display method and Related product |
WO2020093799A1 (en) * | 2018-11-06 | 2020-05-14 | 华为技术有限公司 | Image processing method and apparatus |
US11917288B2 (en) | 2018-11-06 | 2024-02-27 | Huawei Technologies Co., Ltd. | Image processing method and apparatus |
CN113284052A (en) * | 2020-02-19 | 2021-08-20 | 阿里巴巴集团控股有限公司 | Image processing method and apparatus |
CN111479069A (en) * | 2020-04-23 | 2020-07-31 | 深圳创维-Rgb电子有限公司 | Camera control method, display terminal and computer storage medium |
CN111479069B (en) * | 2020-04-23 | 2021-09-24 | 深圳创维-Rgb电子有限公司 | Camera control method, display terminal and computer storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN106210510B (en) | 2019-04-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10636221B2 (en) | Interaction method between user terminals, terminal, server, system, and storage medium | |
CN109087239B (en) | Face image processing method and device and storage medium | |
US9779527B2 (en) | Method, terminal device and storage medium for processing image | |
RU2632153C2 (en) | Method, device and terminal for displaying virtual keyboard | |
CN106210510B (en) | A kind of photographic method based on Image Adjusting, device and terminal | |
EP3035283A1 (en) | Image processing method and apparatus, and terminal device | |
JP6557741B2 (en) | Picture combining method, terminal, and picture combining system | |
CN104954149B (en) | The method, apparatus and system of data sharing are carried out in Web conference | |
CN106204423B (en) | A kind of picture-adjusting method based on augmented reality, device and terminal | |
CN106127829B (en) | Augmented reality processing method and device and terminal | |
CN105989572B (en) | Picture processing method and device | |
US20150379163A1 (en) | Method and Apparatus for Creating Curved Surface Model | |
JP2016511875A (en) | Image thumbnail generation method, apparatus, terminal, program, and recording medium | |
CN106959761A (en) | A kind of terminal photographic method, device and terminal | |
CN111127595A (en) | Image processing method and electronic device | |
CN108415641A (en) | A kind of processing method and mobile terminal of icon | |
CN109857297A (en) | Information processing method and terminal device | |
CN105635553B (en) | Image shooting method and device | |
CN109151367A (en) | A kind of video call method and terminal device | |
CN108228033A (en) | A kind of message display method and mobile terminal | |
CN109683802A (en) | A kind of icon moving method and terminal | |
CN103399657A (en) | Mouse pointer control method, device and terminal device | |
CN109669656A (en) | A kind of information display method and terminal device | |
CN110536005A (en) | A kind of object display adjusting method and terminal | |
WO2021104162A1 (en) | Display method and electronic device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information |
Address after: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong
Applicant after: OPPO Guangdong Mobile Communications Co., Ltd.
Address before: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong
Applicant before: Guangdong OPPO Mobile Communications Co., Ltd.
CB02 | Change of applicant information | ||
GR01 | Patent grant | ||
GR01 | Patent grant |