CN110740246A - Image correction method, mobile device and terminal device - Google Patents

Image correction method, mobile device and terminal device

Info

Publication number
CN110740246A
CN110740246A
Authority
CN
China
Prior art keywords
image
included angle
determining
pupil
target user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810788605.XA
Other languages
Chinese (zh)
Inventor
费天翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Health Information Technology Ltd
Original Assignee
Alibaba Health Information Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Health Information Technology Ltd filed Critical Alibaba Health Information Technology Ltd
Priority to CN201810788605.XA priority Critical patent/CN110740246A/en
Publication of CN110740246A publication Critical patent/CN110740246A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body

Abstract

The application provides an image correction method, a mobile device and a terminal device. The method comprises: acquiring a face image of a target user with a front camera assembly of a mobile terminal; when the distance between the target user and the front camera assembly is determined to be smaller than a preset threshold, correcting the position of the pupil image in the face image so that the pupil image is located at the center of the eye region of the face image; and displaying the corrected face image.

Description

Image correction method, mobile device and terminal device
Technical Field
The application belongs to the technical field of image processing, and particularly relates to an image correction method, a mobile device and a terminal device.
Background
At present, with the rapid development of mobile terminals, their functions are becoming more and more powerful, for example taking pictures, recording videos and chatting. Among these, photographing capability is increasingly regarded as an important factor in evaluating the performance of a mobile terminal.
However, when a self-portrait is taken with the front camera, the distance between the subject and the camera is very short. With the distance between the camera and the imaging interface unchanged, the shorter the distance between the subject and the camera, the larger the resulting viewing-angle deviation. Because the subject is particularly close to the camera during self-shooting, the viewing-angle deviation when the person looks at the camera is relatively large. As a result, when the self-portrait is previewed on the mobile terminal, there is a conflict between the real viewing angle and the preview: if the user focuses on the camera, the captured photo shows a direct-view angle, but the user cannot preview it on the screen and therefore cannot judge the result; if the user instead looks at the preview on the screen, the line of sight in the captured photo appears skewed.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The application aims to provide an image correction method, a mobile device and a terminal device, so as to reduce the deviation of the user's line of sight when previewing a photo during self-shooting and thereby improve the user experience.
The application provides an image rectification method, a mobile device and a terminal device, which are realized as follows:
A method of image rectification, the method comprising:
acquiring a facial image of a target user by using a front camera assembly of the mobile terminal;
when the distance between a target user and the front camera shooting assembly is determined to be smaller than a preset threshold value, correcting the position of a pupil image in the face image to enable the pupil image to be located at the center of an eye area of the face image;
the corrected face image is displayed.
A mobile device, comprising:
the camera shooting assembly is used for acquiring a face image of a user;
a display component for displaying the acquired face image in real time;
a processing unit coupled to the display component for correcting a position of a pupil image in the face image such that the pupil image is located at a center position of an eye region of the face image.
A method of image rectification, the method comprising:
acquiring a face image with the front camera when the front camera assembly of the mobile terminal is started and the distance between a target user and the camera is smaller than a preset threshold;
correcting a pupil position of a target user in the facial image;
the corrected face image is displayed.
A terminal device, comprising a processor and a memory for storing processor-executable instructions, wherein the processor executes the instructions to implement the following steps:
acquiring a facial image of a target user by using a front camera assembly of the mobile terminal;
when the distance between a target user and the front camera shooting assembly is determined to be smaller than a preset threshold value, correcting the position of a pupil image in the face image to enable the pupil image to be located at the center of an eye area of the face image;
the corrected face image is displayed.
A computer-readable storage medium having stored thereon computer instructions which, when executed, implement the steps of the above-described method.
In the image correction method provided by the application, when the front camera assembly of the mobile terminal is started to obtain a facial image and the distance between the target user and the front camera assembly is determined to be smaller than a preset threshold, the pupil position of the target user in the facial image is corrected and the corrected facial image is displayed. This solves the problem that the user's line of sight appears skewed in the preview interface when the user focuses on the terminal screen to preview the picture, and achieves the technical effect of improving the user experience.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description show only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a flow diagram of an embodiment of the image rectification method provided herein;
FIG. 2 is a schematic diagram illustrating the principle of determining the horizontal offset angle and the vertical offset angle provided by the present application;
FIG. 3 is a schematic diagram of pupil position adjustment provided herein;
FIG. 4 is a schematic view of a capture interface provided herein;
FIG. 5 is a block diagram of a mobile terminal according to the present disclosure;
FIG. 6 is a block diagram of the image correction apparatus according to the present application.
Detailed Description
To help those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments. The described embodiments are only some of the embodiments of the present application, rather than all of them.
Consider that, in conventional self-shooting, the preview view and the direct (at-camera) view cannot both be obtained: because the camera is located at the top of the terminal device, if the shooting result is previewed in real time there is a viewing-angle difference between the pupil position and the camera, which makes the pupil position in the image appear shifted.
Based on this, an image correction and display method is proposed in this example, as shown in FIG. 1, which may include the following steps:
Step 101: acquiring a facial image of a target user by using a front camera assembly of the mobile terminal;
For example, the image captured by the front camera of the mobile terminal can be acquired when the front camera is started for shooting and the distance between the target user and the camera is less than a preset threshold.
Since this example addresses self-shooting with the front camera, and there is essentially no viewing-angle deviation when a selfie stick is used or pictures are taken from a distance, it is possible to first determine whether the condition requiring pupil correction is satisfied.
For this reason, the conditions for pupil correction may be set as: the front camera is being used to take a picture, and the distance between the user and the camera is smaller than a preset threshold. For example, with the preset threshold set to 0.3 m, pupil position correction is triggered if the distance is smaller than 0.3 m.
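For illustration only, the following Python sketch shows one way such a trigger check could look. The session object and its front_camera_active / measure_distance_m members are hypothetical placeholders for whatever camera and ranging API the terminal actually exposes; they are not part of this application.

```python
# Minimal sketch of the correction trigger described above.
# "session" and its members are hypothetical placeholders.
CORRECTION_THRESHOLD_M = 0.3  # example preset threshold from the text above

def should_correct_pupil(session) -> bool:
    """Return True only when the front camera is active and the subject
    is closer than the preset threshold."""
    if not session.front_camera_active:
        return False
    distance_m = session.measure_distance_m()  # e.g. infrared ranging
    return distance_m < CORRECTION_THRESHOLD_M
```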
The distance may be determined by ranging methods such as infrared ranging or obstacle ranging; which method is adopted can be decided according to actual requirements, and this application does not limit it.
Further, ranging can be turned on proactively; that is, once the front camera is turned on for shooting, ranging can be triggered to determine whether the pupil position needs to be corrected.
Step 102: correcting the pupil position of the target user in the captured image. Specifically, when it is determined that the distance between the target user and the front camera module is smaller than a preset threshold, the position of the pupil image in the face image may be corrected so that the pupil image is located at the center of the eye area of the face image.
When pupil position correction is performed, a position correction value may be determined first, and the correction is then carried out according to that value.
The position correction value is used to correct the pupil position in the captured image so that the line of sight remains straight even while the user is looking at the preview interface of the terminal. The correction value can be determined from the distance between the target user and the camera and the distance between the camera and the imaging position; since the pupil is moved in both the horizontal and the vertical direction, the horizontal offset angle and the vertical offset angle both need to be determined. Specifically, the position correction value may be determined according to the following steps:
S1: determining a first horizontal direction included angle and a first vertical direction included angle between the eyes of the target user and the camera.
Specifically, a first distance between the eyes of the target user and the camera can be obtained, a second distance between the camera and the position of the user's eyes as imaged on the screen can be obtained, and the first horizontal direction included angle and the first vertical direction included angle can then be determined from the first distance and the second distance.
S2: determining a second horizontal direction included angle and a second vertical direction included angle between the eyes of the target user and the position of the eyes in the captured image;
S3: determining a horizontal deviation included angle according to the first horizontal direction included angle and the second horizontal direction included angle;
S4: determining a vertical deviation included angle according to the first vertical direction included angle and the second vertical direction included angle;
S5: determining the position correction value according to the horizontal deviation included angle and the vertical deviation included angle. Specifically, a position correction value in the horizontal direction is determined from the horizontal deviation included angle, and a position correction value in the vertical direction is determined from the vertical deviation included angle.
For example, if the user opens the front camera, it may be determined that the user wants to take a picture or record a video, and face detection may be triggered. If a face is detected, the user is indeed using the front camera of the phone to take a picture or record a video, and depth detection may then be started. Each face in the image is numbered Fi, and the distance Si between face Fi and the camera is obtained. Then, following the angle schematic shown in FIG. 2, the line-of-sight deviation angles A and B of each face can be calculated from the distance Si and the distance Ki between the front camera of the mobile terminal and the image of the corresponding face Fi on the screen.
FIG. 2 is a schematic of self-shooting with the front camera of a mobile phone. During shooting, the position of the face and of the eyes can be identified, and the straight-line distance between the eyes and the camera as well as the straight-line distance between the camera and the eyes in the imaging interface can be determined. In FIG. 2, S denotes the distance between the eyes and the camera, and K denotes the distance between the camera and the eyes in the imaging interface.
After the horizontal offset angle and the vertical offset angle are determined, the determined horizontal offset angle and vertical offset angle may be used as the position correction amount.
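The following Python sketch illustrates steps S1-S5 under the geometry of FIG. 2. The variable names (s for the distance S, k_x and k_y for the horizontal and vertical components of the distance K) and the simple angle-to-pixel mapping are assumptions made for illustration; they are not the patent's own notation or mapping.

```python
import math

def offset_angles(s: float, k_x: float, k_y: float) -> tuple[float, float]:
    """Return the (horizontal, vertical) line-of-sight deviation angles A and B
    in radians, from the eye-to-camera distance s and the camera-to-on-screen-eye
    distances k_x, k_y (all in metres)."""
    angle_a = math.atan2(k_x, s)   # horizontal deviation included angle A
    angle_b = math.atan2(k_y, s)   # vertical deviation included angle B
    return angle_a, angle_b

def pupil_shift_px(s: float, k_x: float, k_y: float,
                   gain_px_per_rad: float) -> tuple[int, int]:
    """Map the deviation angles to an on-screen pupil shift with a simple
    linear gain (pixels per radian); the real mapping would depend on the
    eye model used."""
    a, b = offset_angles(s, k_x, k_y)
    return int(round(a * gain_px_per_rad)), int(round(b * gain_px_per_rad))
```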
In some embodiments, the eyes and the pupils may be identified by face recognition and object recognition technology; the eye image is then processed and segmented to obtain the pupil, and the pupil is moved according to the horizontal offset angle and the vertical offset angle obtained above, yielding an image with the adjusted pupil position.
Specifically, as shown in FIG. 3, the dotted circle is the pupil position before adjustment and the solid circle is the pupil position after adjustment; after adjustment the pupil is located in the middle of the eye, which adjusts the angle of the gaze focus.
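The following rough Python/OpenCV sketch illustrates the "split out the pupil, then move it" idea, using OpenCV's stock eye cascade and the darkest point of the eye as a pupil proxy. It only shows the flow: a production implementation would segment and blend the pupil far more carefully, and the bounds handling here is deliberately naive.

```python
import cv2
import numpy as np

# Stock Haar cascade shipped with opencv-python; a rough eye detector.
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def shift_pupil(face_bgr: np.ndarray, dx: int, dy: int) -> np.ndarray:
    """Crudely move each detected pupil by (dx, dy) pixels.
    Bounds checks are omitted for brevity."""
    out = face_bgr.copy()
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in eye_cascade.detectMultiScale(gray, 1.1, 5):
        eye = gray[y:y + h, x:x + w]
        _, _, (px, py), _ = cv2.minMaxLoc(eye)     # darkest point ~ pupil centre
        r = max(2, w // 8)                          # rough pupil radius
        y0, y1 = y + py - r, y + py + r
        x0, x1 = x + px - r, x + px + r
        patch = face_bgr[y0:y1, x0:x1].copy()       # "split out" the pupil image
        fill = np.median(face_bgr[y:y + h, x:x + w].reshape(-1, 3), axis=0)
        out[y0:y1, x0:x1] = fill.astype(face_bgr.dtype)   # repaint the vacated spot
        out[y0 + dy:y1 + dy, x0 + dx:x1 + dx] = patch     # paste at the corrected spot
    return out
```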
Further, considering that the adjusted image may sometimes look unnatural because of lighting and similar issues, the light reflection (catchlight) of the pupil can be fine-tuned after the pupil position adjustment is completed, making the adjusted image more vivid and the overall effect better.
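A toy illustration of such a catchlight fine-tune is sketched below; the highlight size and blend weight are arbitrary assumptions, not values taken from this application.

```python
import cv2
import numpy as np

def add_catchlight(face_bgr: np.ndarray, pupil_center: tuple[int, int],
                   radius: int = 2, strength: float = 0.6) -> np.ndarray:
    """Paint a small soft highlight near the new pupil centre so the moved
    pupil does not look flat."""
    highlight = np.zeros_like(face_bgr)
    cv2.circle(highlight, pupil_center, radius, (255, 255, 255), -1)
    highlight = cv2.GaussianBlur(highlight, (7, 7), 0)        # soften the spot
    return cv2.addWeighted(face_bgr, 1.0, highlight, strength, 0)
```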
Step 103: displaying the corrected image.
The corrected image can be displayed on the display screen of the terminal device. In practice, the terminal device can keep correcting the viewing angle continuously from the moment the user opens the camera until the user finishes shooting.
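Put together, the continuous correction could look roughly like the loop below, reusing the hypothetical helpers sketched earlier (should_correct_pupil, pupil_shift_px, shift_pupil); capture_frame, show_preview and camera_to_eye_on_screen_m likewise stand in for whatever camera and display API the terminal actually provides.

```python
def preview_loop(session, px_gain: float = 400.0):
    """Schematic preview loop: correct the pupil in every frame while the
    front camera stays open."""
    while session.front_camera_active:              # until the user finishes shooting
        frame = session.capture_frame()             # current face image
        if should_correct_pupil(session):
            s = session.measure_distance_m()        # eye-to-camera distance S
            k_x, k_y = session.camera_to_eye_on_screen_m()  # distance K split into x/y
            dx, dy = pupil_shift_px(s, k_x, k_y, px_gain)
            frame = shift_pupil(frame, dx, dy)
        session.show_preview(frame)                 # corrected image is displayed
```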
That is, in the above example, the viewing angle of the eye is dynamically calculated and adjusted (by adjusting the position of the pupil) based on the distance between the face and the camera, so that previewing the picture while taking a photo or video comes close to the experience of looking into a mirror, and a better self-shooting result is obtained.
The pupil-adjusting correction and display mode may be triggered only when a preset trigger condition is met, for example when it is determined that the user is taking a picture with the front camera or when a correction switch is turned on. As shown in FIG. 4, a trigger switch may be placed on the self-shooting interface so that the user can choose whether to adjust the pupil position, which effectively improves flexibility and the user experience.
The captured image may be a photograph or a video; this is not limited in this application and may be chosen according to the actual situation.
The image rectification and display method in the above example relies on depth detection, i.e., determining the distance between the user and the camera. Depth detection measures the distance between the photographed object and the camera (the depth) and can be implemented in one of the following three ways:
1) Binocular ranging, i.e., binocular matching (dual RGB cameras plus an optional illumination system):
This method relies on the principle of triangulation: the difference between the x-coordinates at which the target point is imaged in the left and right views (the disparity d) is inversely proportional to the distance from the target point to the imaging plane, Z = f·T/d (with f the focal length and T the baseline between the two cameras), from which the depth can be calculated. Triangulation is based entirely on image processing: depth values are obtained by finding the same feature points in both images to establish matching points. In binocular ranging the light source can be an uncoded source such as ambient light or white light, and the matching depends on the feature points of the photographed object.
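As a small worked example of the triangulation relationship Z = f·T/d described above (the helper below is illustrative, not part of this application):

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Binocular triangulation: Z = f * T / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Example: f = 800 px, baseline T = 0.02 m, disparity d = 16 px  ->  Z = 1.0 m
print(depth_from_disparity(800, 0.02, 16))
```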
2) Structured-light detection, i.e., an RGB camera plus an infrared structured-light projector and a structured-light depth sensor (CMOS):
In structured-light ranging the projection light source is coded, i.e., given distinguishing features, so what is captured is the image of the coded light pattern projected onto the object and modulated by the depth of the object's surface. A pre-designed pattern is projected as a reference image (the coded light source); the structured light falls on the object surface, and a camera receives the structured-light pattern reflected from that surface. Two images are thus obtained: the pre-designed reference image, and the reflected structured-light pattern acquired by the camera. Because the received pattern is deformed by the three-dimensional shape of the object, the spatial information of the object surface can be calculated from the positions and the degree of deformation of the pattern on the camera, which achieves the ranging.
3) Light-coding (laser speckle light source) ranging:
This ranging method differs from structured light in that the light-coding source is "laser speckle": diffraction spots formed randomly when a laser hits a rough object or passes through ground glass. The speckles are highly random and their pattern changes with distance; that is, the speckle patterns at any two positions in space are different. Once the space has been illuminated in this way, the whole space is effectively marked, and if an object is placed in the space its position can be determined simply from the speckle pattern on it. This method requires the speckle patterns of the whole space to be recorded in advance, i.e., a one-time light-source calibration.
Which ranging method is adopted can be chosen according to the required accuracy and complexity, and this application does not limit it.
There is also provided an embodiment of the above method that may run on a mobile terminal, a computer terminal, or a similar computing device. The steps illustrated in the flowchart may be performed in a computer system, for example as a set of computer-executable instructions, and although a logical order is shown in the flowchart, in some cases the steps may be performed in an order different from the one described here.
Taking a mobile terminal as an example, the mobile terminal 10 may include one or more processors 102 (only one is shown in the figure; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 104 for storing data, and a transmission module 106 for communication. Those skilled in the art will understand that the structure shown in FIG. 5 is only an illustration and does not limit the structure of the electronic device described above.
The memory 104 may be used to store software programs and modules of application software, such as the program instructions/modules corresponding to the image rectification method in the embodiments of the present invention. The processor 102 executes various functional applications and data processing by running the software programs and modules stored in the memory 104, thereby implementing the image rectification method of the application program. The memory 104 may include high-speed random access memory and may further include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
The transmission module 106 is used to receive or send data via a network. Specific examples of such a network may include a wireless network provided by the communication provider of the computer terminal 10. In one example, the transmission module 106 includes a network adapter (NIC) that can connect to other network devices through a base station so as to communicate with the Internet. In another example, the transmission module 106 may be a Radio Frequency (RF) module used to communicate with the Internet wirelessly.
In terms of software, the image rectification device may be as shown in fig. 6, and includes: an acquisition module 601, a rectification module 602, and a display module 603, wherein:
an obtaining module 601, configured to obtain a facial image of a target user by using a front camera assembly of a mobile terminal;
a correction module 602, configured to correct a position of a pupil image in the face image when it is determined that a distance between a target user and the front camera assembly is smaller than a preset threshold, so that the pupil image is located at a center position of an eye region of the face image;
a display module 603 for displaying the corrected face image.
In one embodiment, the obtaining module 601 may determine a position correction value according to the following steps and correct the pupil position of the target user in the facial image according to the position correction value:
S1: determining a first horizontal direction included angle and a first vertical direction included angle between the eyes of the target user and the camera;
S2: determining a second horizontal direction included angle and a second vertical direction included angle between the eyes of the target user and the position of the eyes in the face image;
S3: determining a horizontal deviation included angle according to the first horizontal direction included angle and the second horizontal direction included angle;
S4: determining a vertical deviation included angle according to the first vertical direction included angle and the second vertical direction included angle;
S5: determining the position correction value according to the horizontal deviation included angle and the vertical deviation included angle.
Specifically, determining the position correction value according to the horizontal deviation included angle and the vertical deviation included angle may include: determining a position correction value in the horizontal direction according to the horizontal deviation included angle; and determining a position correction value in the vertical direction according to the vertical deviation included angle.
In one embodiment, determining the first horizontal direction included angle and the first vertical direction included angle between the eyes of the target user and the camera may include: obtaining a first distance between the eyes of the target user and the camera, obtaining a second distance between the camera and the position of the user's eyes as imaged on the screen, and determining the first horizontal direction included angle and the first vertical direction included angle according to the first distance and the second distance.
In one embodiment, the correction module 602 may split out an image of the pupil position from the facial image and move the split-out pupil image according to the position correction value to correct the pupil position of the target user.
In one embodiment, the obtaining module 601 may determine whether a preset correction condition is satisfied, and determine the position correction value only when the preset correction condition is satisfied.
In one embodiment, the preset correction condition may include, but is not limited to, at least one of the following: shooting is being performed with the front camera; the correction switch is turned on.
In some embodiments, after the pupil position of the target user in the face image is corrected according to the position correction value, the light reflection at the corrected pupil position may also be adjusted.
The facial image may include, but is not limited to, pictures and videos.
The mobile terminal may be a terminal device or software used by a client. Specifically, the client may be a terminal device such as a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart watch, or other wearable devices. Of course, the mobile terminal may also be software running in the terminal device, for example application software on a mobile phone such as Taobao, Alipay or a browser.
According to the image correction method above, the pupil position of the target user in the facial image is corrected using the determined position correction value and the corrected facial image is displayed. This solves the problem that the user's line of sight appears skewed in the preview interface when the user focuses on the terminal screen to preview the picture, reduces the deviation of the user's line of sight when previewing a picture during self-shooting, and improves the user experience.
The order of the steps recited in the embodiments is merely one of many possible execution orders and does not represent the only order in which the steps can be executed.
For convenience of description, the above apparatus is described as being divided into various modules by function; when implementing the application, the functions of the modules may be implemented in one or more pieces of software and/or hardware.
The methods, apparatus, or modules described herein may be implemented by a controller executing computer-readable program code, and the controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of such controllers include, but are not limited to, the microcontrollers ARC 625D, Atmel AT91SAM, Microchip 18F26K20 and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, in addition to implementing the controller purely as computer-readable program code, the method steps can be logically programmed so that the controller realizes the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller may therefore be regarded as a hardware component, and the devices included in it for realizing various functions may also be regarded as structures within the hardware component, or even as both software modules implementing the methods and structures within the hardware component.
Some of the modules of the apparatus described herein may be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, classes, and the like that perform particular tasks or implement particular abstract data types.
Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product or embodied in the course of implementing data migration. The computer software product may be stored in a storage medium such as a ROM/RAM, a magnetic disk, or an optical disk, and includes instructions for causing a computer device (which may be a personal computer, a mobile terminal, a server, a network device, or the like) to execute the methods described in the various embodiments or in parts of the embodiments of the present application.
The embodiments in the present specification are described in a progressive manner, and the same or similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. All or portions of the present application are operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, mobile communication terminals, multiprocessor systems, microprocessor-based systems, programmable electronic devices, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
While the present application has been described with examples, those of ordinary skill in the art will appreciate that there are numerous variations and permutations of the present application without departing from the spirit of the application, and it is intended that the appended claims encompass such variations and permutations without departing from the spirit of the application.

Claims (13)

1. A method for image rectification, the method comprising:
acquiring a facial image of a target user by using a front camera assembly of the mobile terminal;
when the distance between a target user and the front camera shooting assembly is determined to be smaller than a preset threshold value, correcting the position of a pupil image in the face image to enable the pupil image to be located at the center of an eye area of the face image;
the corrected face image is displayed.
2. The method of claim 1, wherein correcting the position of the pupil image in the face image comprises:
determining a position correction value;
and correcting the pupil position of the target user in the facial image according to the position correction value.
3. The method of claim 2, wherein determining a position correction value comprises:
determining a first horizontal direction included angle and a first vertical direction included angle between the eyes of the target user and the camera assembly;
determining a second horizontal direction included angle and a second vertical direction included angle between the eyes of the target user and the positions of the eyes in the face image;
determining a horizontal deviation included angle according to the first horizontal direction included angle and the second horizontal direction included angle;
determining a vertical deviation included angle according to the first vertical direction included angle and the second vertical direction included angle;
and determining the position correction value according to the horizontal deviation included angle and the vertical deviation included angle.
4. The method of claim 3, wherein determining the position correction value based on the horizontal offset angle and the vertical offset angle comprises:
determining a position correction value in the horizontal direction according to the horizontal deviation included angle;
and determining a position correction value in the vertical direction according to the vertical deviation included angle.
5. The method of claim 3, wherein determining the first horizontal direction included angle and the first vertical direction included angle between the target user's eyes and the camera assembly comprises:
acquiring a first distance between the eyes of the target user and the camera;
acquiring a second distance between the camera and the eye position of the user imaged in the screen;
and determining the first horizontal direction included angle and the first vertical direction included angle according to the first distance and the second distance.
6. The method of claim 2, wherein correcting the pupil position of the target user in the facial image according to the position correction value comprises:
splitting out an image of the pupil location from the facial image;
and moving the split image of the pupil position according to the position correction value to correct the pupil position of the target user.
7. The method of claim 2, wherein determining a position correction value comprises:
determining whether a preset correction condition is met;
in the case where it is determined that the preset correction condition is satisfied, the position correction value is determined.
8. The method of any one of claims 1-7, wherein after correcting the pupil position of the target user in the facial image, the method further comprises:
and adjusting the reflection of the corrected pupil position.
9. The method of , wherein the facial image comprises pictures, videos.
10. A mobile device, comprising:
the camera shooting assembly is used for acquiring a face image of a user;
a display component for displaying the acquired face image in real time;
a processing unit coupled to the display component for correcting a position of a pupil image in the face image such that the pupil image is located at a center position of an eye region of the face image.
11. An image rectification method, characterized in that the method includes:
acquiring a face image with the front camera when the front camera assembly of the mobile terminal is started and the distance between a target user and the camera is smaller than a preset threshold;
correcting a pupil position of a target user in the facial image;
the corrected face image is displayed.
12. A terminal device comprising a processor and a memory for storing processor-executable instructions, wherein the instructions, when executed by the processor, implement the steps of the method of any one of claims 1 to 9.
13. A computer-readable storage medium having stored thereon computer instructions which, when executed, implement the steps of the method of any one of claims 1 to 9.
CN201810788605.XA 2018-07-18 2018-07-18 image correction method, mobile device and terminal device Pending CN110740246A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810788605.XA CN110740246A (en) 2018-07-18 2018-07-18 image correction method, mobile device and terminal device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810788605.XA CN110740246A (en) 2018-07-18 2018-07-18 image correction method, mobile device and terminal device

Publications (1)

Publication Number Publication Date
CN110740246A true CN110740246A (en) 2020-01-31

Family

ID=69233699

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810788605.XA Pending CN110740246A (en) 2018-07-18 2018-07-18 image correction method, mobile device and terminal device

Country Status (1)

Country Link
CN (1) CN110740246A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1725976A (en) * 2002-11-21 2006-01-25 托比伊科技公司 Method and installation for detecting and following an eye and the gaze direction thereof
US9256974B1 (en) * 2010-05-04 2016-02-09 Stephen P Hines 3-D motion-parallax portable display software application
CN101930543A (en) * 2010-08-27 2010-12-29 南京大学 Method for adjusting eye image in self-photographed video
CN103809737A (en) * 2012-11-13 2014-05-21 华为技术有限公司 Method and device for human-computer interaction
CN105430269A (en) * 2015-12-10 2016-03-23 广东欧珀移动通信有限公司 Shooting method and apparatus applied to mobile terminal
CN107277375A (en) * 2017-07-31 2017-10-20 维沃移动通信有限公司 A kind of self-timer method and mobile terminal
CN111445413A (en) * 2020-03-27 2020-07-24 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113038172A (en) * 2014-09-09 2021-06-25 弗劳恩霍夫应用研究促进协会 Audio splicing concept
CN113038172B (en) * 2014-09-09 2023-09-22 弗劳恩霍夫应用研究促进协会 Audio data stream splicing and broadcasting method, audio decoder and audio decoding method
CN113642364A (en) * 2020-05-11 2021-11-12 华为技术有限公司 Face image processing method, device and equipment and computer readable storage medium
WO2021227988A1 (en) * 2020-05-11 2021-11-18 华为技术有限公司 Face image processing method, apparatus and device, and computer readable storage medium
CN113642364B (en) * 2020-05-11 2024-04-12 华为技术有限公司 Face image processing method, device, equipment and computer readable storage medium
CN112733794A (en) * 2021-01-22 2021-04-30 腾讯科技(深圳)有限公司 Method, device and equipment for correcting sight of face image and storage medium
CN112733794B (en) * 2021-01-22 2021-10-15 腾讯科技(深圳)有限公司 Method, device and equipment for correcting sight of face image and storage medium
WO2022156622A1 (en) * 2021-01-22 2022-07-28 腾讯科技(深圳)有限公司 Sight correction method and apparatus for face image, device, computer-readable storage medium, and computer program product
CN114025087A (en) * 2021-10-29 2022-02-08 北京字跳网络技术有限公司 Video shooting method, device, storage medium and program product
WO2023071569A1 (en) * 2021-10-29 2023-05-04 北京字跳网络技术有限公司 Video caputring method, device, storage medium and program product


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200131)