CN112911148B - Image processing method and device and electronic equipment - Google Patents

Image processing method and device and electronic equipment

Info

Publication number
CN112911148B
Authority
CN
China
Prior art keywords
image
user
target
pupil
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110120518.9A
Other languages
Chinese (zh)
Other versions
CN112911148A (en)
Inventor
李硕
程金鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202110120518.9A priority Critical patent/CN112911148B/en
Publication of CN112911148A publication Critical patent/CN112911148A/en
Application granted granted Critical
Publication of CN112911148B publication Critical patent/CN112911148B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/951Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio

Abstract

An embodiment of the application discloses an image processing method, an image processing apparatus, and an electronic device. The method includes: acquiring a first image through a rear camera, where the shooting direction of the rear camera is consistent with the line-of-sight direction of a user; and processing the first image according to characteristic parameters to obtain a target image. The characteristic parameters are determined according to a second image captured by a front camera, and the second image includes an image formed in the user's pupil. The embodiment of the application can solve the problem that a user must perform relatively complicated post-processing to make a captured image approach the scene seen by the eyes.

Description

Image processing method and device and electronic equipment
Technical Field
The embodiment of the application relates to the field of information processing, in particular to an image processing method and device and an electronic device.
Background
At present, more and more people enjoy shooting with electronic devices. However, real shooting environments are often complex, and many factors, such as lighting, shooting angle, and distance from the subject, cause the captured image to differ greatly from the scene seen by the user's eyes. If an image matching the scene seen by the eyes is desired, the user must also post-process the captured image.
In the process of implementing the present application, the applicant finds that at least the following problems exist in the prior art:
in order to make the captured image approach the scene seen by the eyes, the user needs to go through a relatively complicated post-processing.
Disclosure of Invention
The embodiment of the application provides an image processing method and device and electronic equipment, and can solve the problem that a user needs to perform relatively complicated post-processing in order to enable a shot image to be close to a scene seen by eyes.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an image processing method, where the method may include:
acquiring a first image through a rear camera, wherein the shooting direction of the rear camera is consistent with the sight line direction of a user;
processing the first image according to the characteristic parameters to obtain a target image;
the characteristic parameters are determined according to a second image shot by the front camera, and the second image comprises an image in the pupil of the user.
In a second aspect, an embodiment of the present application provides an image processing apparatus, which may include:
the shooting module is used for acquiring a first image through the rear camera, and the shooting direction of the rear camera is consistent with the sight line direction of a user;
the processing module is used for processing the first image according to the characteristic parameters to obtain a target image; the characteristic parameters are determined according to a second image shot by the front camera, and the second image comprises an image in the pupil of the user.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium on which a program or instructions are stored, which when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the application, the first image is captured by the rear camera, and the first image is then processed according to the characteristic parameters to obtain the target image. Since the shooting direction of the rear camera is consistent with the line-of-sight direction of the user, the image in the user's pupil in the second image captured by the front camera can characterize the scene seen by the user's eyes. Therefore, the target image, obtained by processing the first image according to the characteristic parameters determined from the second image, differs little from the scene the user actually sees; that is, a 'what you see is what you get' effect can be realized.
Drawings
The present application may be better understood from the following description of specific embodiments of the application taken in conjunction with the accompanying drawings, in which like or similar reference numerals identify like or similar features.
Fig. 1 is a schematic view of an application scenario of an image processing method according to an embodiment of the present application;
fig. 2 is a flowchart of an image processing method according to an embodiment of the present application;
FIG. 3 is a schematic diagram for displaying imaging in a pupil of a user according to an embodiment of the present application;
fig. 4 is a schematic diagram for displaying a target object according to an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating a focus area according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram for displaying a pupil shift of a user according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 8 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present disclosure;
fig. 9 is a schematic hardware structure diagram of another electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
The terms first, second and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application are capable of operation in sequences other than those illustrated or described herein. In addition, "and/or" in the specification and claims means at least one of connected objects, a character "/" generally means that a preceding and succeeding related objects are in an "or" relationship.
The image processing method provided by the embodiment of the present application can be applied to at least the following application scenarios, which are described below.
In view of the problems in the related art, embodiments of the present application provide an image processing method and apparatus, an electronic device, and a storage medium, which can solve the problem in the related art that a difference between a captured image and a scene seen by a user is large.
According to the method provided by the embodiment of the application, the first image is captured by the rear camera, and the first image is then processed according to the characteristic parameters to obtain the target image. Since the shooting direction of the rear camera is consistent with the line-of-sight direction of the user, the image in the user's pupil in the second image captured by the front camera can characterize the scene seen by the user's eyes. Therefore, the target image, obtained by processing the first image according to the characteristic parameters determined from the second image, differs little from the scene the user actually sees; that is, a 'what you see is what you get' effect can be realized.
The position relationship between the rear camera and the front camera can be as shown in fig. 1.
Based on the application scenario, the following describes in detail the image processing method provided in the embodiment of the present application.
Fig. 2 is a flowchart of an image processing method according to an embodiment of the present application.
As shown in fig. 2, the image processing method may include steps 210 to 220, and the method is applied to an image processing apparatus, and specifically as follows:
and step 210, acquiring a first image through a rear camera, wherein the shooting direction of the rear camera is consistent with the sight line direction of a user.
Step 220, processing the first image according to the characteristic parameters to obtain a target image; the characteristic parameters are determined according to a second image shot by the front camera, and the second image comprises an image in the pupil of the user.
According to the image processing method provided by the embodiment of the application, the first image is captured by the rear camera, and the first image is then processed according to the characteristic parameters to obtain the target image. Since the shooting direction of the rear camera is consistent with the line-of-sight direction of the user, the image in the user's pupil in the second image captured by the front camera can characterize the scene seen by the user's eyes. Therefore, the target image, obtained by processing the first image according to the characteristic parameters determined from the second image, differs little from the scene the user actually sees; that is, a 'what you see is what you get' effect can be realized.
The following describes the contents of steps 210 to 220:
first, step 210 is involved.
The shooting direction of the rear camera is consistent with the sight direction of the scenery observed by the user.
Next, step 220 is involved.
The characteristic parameters are determined according to a second image shot by the front camera, and the second image comprises an image in the pupil of the user; the shooting direction of the rear camera is consistent with the sight line direction of the user. Wherein the user is a photographer. In a possible embodiment, before step 220, the following steps may be further included:
as shown in fig. 3, first, a second image is taken through the front camera, the second image including an image of a scene observed by the user in the pupil of the user; secondly, identifying imaging in the user's pupil in the second image; finally, characteristic parameters are determined from the imaging in the user's pupil. And subsequently, the first image obtained by shooting by the rear camera can be subjected to image processing according to the characteristic parameters.
The front camera involved in the above may be a front camera of an electronic device.
In a possible embodiment, before step 220, the following steps may be further included:
extracting color parameters from the image in the user's pupil; the color parameter is determined as a characteristic parameter.
Since the second image comprises an image of the scene observed by the user in the user's pupil, color parameters can be extracted from the image in the user's pupil, which are color parameters of the scene that the user's eyes really see.
After extracting the color parameter, the first image may be rendered according to the color parameter, and the color of the obtained target image substantially approaches the color of the scene seen by the user's eyes.
For example, if the electronic device faces into backlight while capturing the first image, the captured first image is usually dark: the user may see emerald-green trees, while the trees in the captured first image appear dark green. In this case, the color parameter can be determined from the image in the user's pupil, and the first image can then be rendered according to that color parameter, so that the processed target image approaches the scene the user actually sees, improving the quality of the captured image.
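One simple way to realize "extract a color parameter and render the first image with it" is per-channel mean matching, sketched below. The disclosure does not specify the rendering algorithm; this technique, and the flat-list image representation, are illustrative assumptions only.

```python
def mean_color(image):
    """Per-channel mean of an image given as a list of (r, g, b) pixels."""
    n = len(image)
    return tuple(sum(p[c] for p in image) / n for c in range(3))

def render_to_match(first_image, pupil_image):
    """Shift first_image's channel means toward those of the pupil imaging."""
    src = mean_color(first_image)   # color as captured by the rear camera
    dst = mean_color(pupil_image)   # color as seen by the user's eye
    offset = tuple(d - s for s, d in zip(src, dst))
    return [tuple(min(255, max(0, p[c] + offset[c])) for c in range(3))
            for p in first_image]

# dark-green trees in the first image, emerald-green in the pupil imaging
first_image = [(0, 100, 0), (0, 120, 0)]
pupil_image = [(0, 200, 80), (0, 220, 80)]
target = render_to_match(first_image, pupil_image)
```

A production implementation would more likely use full color-transfer statistics (mean and standard deviation per channel) rather than a plain mean shift, but the principle of pulling the rear-camera colors toward the pupil-observed colors is the same.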
In another possible embodiment, before step 220, the following steps may be further included:
identifying a target object in imaging in a pupil of a user; determining a volume parameter of the target object in the imaging of the user's pupil; under the condition that the volume parameter of the target object is larger than a preset volume parameter threshold value, determining the volume parameter of the target object as a characteristic parameter; determining a first focal length according to the volume parameter; and controlling the rear camera to shoot the target object at the first focal length to obtain a plurality of first images.
If the user is far from the scene, the viewing range is wide, but the camera may capture only part of the scene. Therefore, to meet the user's expectation, the target object in the image in the user's pupil in the second image can be identified by a pupil recognition technique, and the characteristic parameter can be determined from the volume parameter of the target object, for use in synthesizing the target image.
If the volume parameter of the target object is greater than the preset volume parameter threshold, an image captured directly may not include the full view of the target object. Therefore, the first focal length can be determined according to the volume parameter: when the volume parameter exceeds the preset threshold, the target object is photographed with a shorter focal length to obtain a plurality of first images, which are then synthesized into an image that includes the full view of the target object. Here, the target object on which the user's pupils are focused can be determined by recognizing the image in the user's pupils; if its volume parameter exceeds the preset threshold, the viewfinder of the rear camera at the current focal length may not contain the full view of the target object. The first focal length is then determined from the volume parameter of the target object, and the rear camera is controlled to shoot the target object at the first focal length, obtaining a plurality of first images for the subsequent synthesis of the target image.
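The disclosure does not give the mapping from the volume parameter to the first focal length. One plausible sketch, assuming the ratio of volume parameter to threshold measures how far the object overflows the current field of view, shortens the focal length proportionally:

```python
def choose_first_focal_length(volume_param, threshold, base_focal_mm=26.0):
    """Pick a shorter (wider) focal length when the object exceeds the threshold.

    Illustrative only: volume_param / threshold is treated as the factor by
    which the object overflows the field of view, and the focal length is
    shortened by that factor so the full object fits. The 26 mm default is
    a typical main-camera equivalent focal length, not a value from the patent.
    """
    if volume_param <= threshold:
        return base_focal_mm                      # object already fits
    return base_focal_mm * threshold / volume_param  # zoom out proportionally

fits = choose_first_focal_length(50, threshold=100)   # below threshold
wide = choose_first_focal_length(200, threshold=100)  # twice the threshold
```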
In another possible embodiment, step 220 may specifically include the following steps:
and synthesizing the plurality of first images according to the characteristic parameters to obtain the target image.
The process of synthesizing the plurality of first images according to the characteristic parameters is similar to panoramic shooting: a plurality of first images are captured from left to right and then synthesized and fused according to the volume parameter of the target object, so that the resulting target image has a wider view range than any single first image.
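A naive sketch of this left-to-right synthesis is shown below, using fixed-overlap concatenation of row-major frames. Real panorama synthesis involves feature matching, warping, and blending (e.g. as in common stitching libraries); the fixed `overlap` assumption here is purely illustrative.

```python
def stitch_left_to_right(images, overlap):
    """Concatenate row-major frames shot left-to-right, dropping the
    overlapping columns of each subsequent frame (naive panorama)."""
    result = [list(row) for row in images[0]]
    for img in images[1:]:
        for r, row in enumerate(img):
            result[r].extend(row[overlap:])  # skip columns already covered
    return result

left = [[1, 2, 3]]            # one-row toy frames
right = [[3, 4, 5]]           # first column repeats the last of `left`
panorama = stitch_left_to_right([left, right], overlap=1)
```

The stitched `panorama` covers a wider view than either input frame, which mirrors the target-image behavior described above.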
For example, as shown in fig. 4, suppose the user wants to photograph a target building (i.e., a target object). Since the target building is large, the user would otherwise need to keep moving backwards to find a suitable shooting location, or manually adjust the focal length, in order to capture the building's panorama.
Based on the above embodiments, the target building (i.e., the target object) on which the user's pupil is focused can be determined from the image in the user's pupil, and the first focal length can be determined according to the volume parameter of the target building. The rear camera is then controlled to shoot the target building at the first focal length, obtaining a plurality of first images that together cover the full view of the target building. Finally, the plurality of first images are synthesized according to the characteristic parameters to obtain the target image.
Thus, the volume parameter can be determined from the image in the user's pupil, and the first images can then be adjusted automatically according to the volume parameter to obtain a target image that better matches the user's expectation. This reduces tedious operations during shooting, improves the convenience and efficiency of shooting, and enhances the user experience.
In yet another possible embodiment, before step 220, the following steps may be further included:
identifying a focusing region in imaging in a user pupil, wherein the focusing region is a region in which the user pupil is focused; the characteristic parameter is determined from the position of the focus area in the second image.
The focus region in the image in the user's pupil can be tracked by a pupil recognition technique to determine whether the user's pupil is focused on a certain region. Determining the characteristic parameter according to the position of the focus area in the second image may specifically include: determining the position of the focus area in the second image, which may be the coordinates of the focus area in the second image, and then determining the characteristic parameter from that position, i.e. the coordinate range covered by the focus area in the second image.
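The "coordinate range covered by the focus area" can be sketched as a bounding box over tracked fixation points, as below. How fixation points are obtained from pupil tracking is outside this sketch; the point list and function name are hypothetical.

```python
def focus_region_extent(points):
    """Bounding box (x0, y0, x1, y1) of tracked fixation points: the
    coordinate range the focus area covers in the second image."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs), min(ys), max(xs), max(ys))

# hypothetical fixation points (pixel coordinates in the second image)
fixations = [(120, 80), (130, 90), (125, 85)]
extent = focus_region_extent(fixations)
```

This extent would serve as the characteristic parameter driving the enlargement step described next.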
In another possible embodiment, step 220 may specifically include the following steps:
enlarging an image area of the first image that includes at least the focus area, according to the characteristic parameters, until a preset enlargement condition is satisfied, to obtain the target image;
wherein the preset enlargement condition includes:
receiving target input of a user; or the enlarged image area meets the preset area range.
As shown in fig. 5, once the focus area in the image in the user's pupil is identified and the characteristic parameter is determined according to the position of the focus area in the second image, the image area of the first image that includes at least the focus area can be enlarged gradually at a preset speed according to the characteristic parameter, until a preset enlargement condition is satisfied, yielding the target image. Automatically enlarging this image area according to the characteristic parameter can replace the user's manual enlargement operation. The preset speed may be a speed set in advance by the user or a default speed of the electronic device.
The user can then check whether the enlarged image area achieves the desired magnification. If so, the electronic device may end the enlargement operation in response to the user's target input (for example, closing the eyes for 3 seconds, or an input on a 'stop magnifying' control); alternatively, the enlargement stops automatically once the enlarged image area meets the preset area range.
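The stop logic above (enlarge at a preset speed until either the user's target input arrives or the preset range is reached) can be sketched as a simple loop. The scale values, step size, and stop predicate are all illustrative placeholders.

```python
def magnify_until(scale_step, stop_requested, max_scale):
    """Enlarge at a preset speed until the user's target input arrives
    or the enlarged area reaches the preset range (modeled as max_scale)."""
    scale = 1.0
    while scale < max_scale:
        scale = min(scale + scale_step, max_scale)
        if stop_requested(scale):  # e.g. eyes closed 3 s, or "stop" control tapped
            break
    return scale

# simulate a user who stops the enlargement once 1.5x is reached
final = magnify_until(0.25, stop_requested=lambda s: s >= 1.5, max_scale=3.0)
```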
Therefore, the focus area in the image in the user's pupil can be identified, and the image area of the first image that includes at least the focus area can then be enlarged automatically according to the characteristic parameters until the preset enlargement condition is satisfied, yielding the target image the user expects. This reduces tedious operations during shooting, improves the convenience and efficiency of shooting, and enhances the user experience.
In another possible embodiment, the second image is a plurality of images, and before step 220, the method further includes the following steps:
determining the deviation direction and the deviation distance of the user pupil relative to a preset reference point according to the images of the user pupil in the plurality of second images; determining characteristic parameters according to the deviation direction and the deviation distance;
determining a second focal length according to the characteristic parameters; and controlling the front camera to shoot at the second focal length to obtain a plurality of second images.
During the capture of the first image, the user's gaze may drift; that is, the scene seen by the user may expand as the gaze drifts. As shown in fig. 6, the front camera captures a plurality of second images in which the user's pupils drift left and right, indicating that the user may be viewing a wider scene than the one framed by the rear camera.
Here, by capturing a plurality of second images, each second image can include the image in the user's pupil. The deviation direction (an angle measured from a preset direction) and the deviation distance (a distance measured from a preset reference) of the user's pupil relative to the preset reference point are then determined from these pupil images, and the characteristic parameters are determined from the deviation direction and deviation distance. For example, the pupil may be offset by 20 degrees from the direction of the midline of the face and by 0.02 mm from the pupil's center position.
Next, the user's sight range, i.e. the boundaries in the up, down, left, and right directions, may be determined based on the characteristic parameters. A second focal length is then determined according to the user's sight range, and the front camera is controlled to shoot at the second focal length to obtain the plurality of second images.
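The deviation direction and distance described above can be computed from a detected pupil center and the preset reference point, for example as an angle and a Euclidean distance. This is a sketch under the assumption that both points are already available as 2-D coordinates; how they are detected is not shown.

```python
import math

def pupil_deviation(pupil_center, reference_point):
    """Deviation of the pupil from a preset reference point: direction as an
    angle in degrees from the reference x-axis, plus the Euclidean distance."""
    dx = pupil_center[0] - reference_point[0]
    dy = pupil_center[1] - reference_point[1]
    angle = math.degrees(math.atan2(dy, dx))  # deviation direction
    distance = math.hypot(dx, dy)             # deviation distance
    return angle, distance

# hypothetical coordinates (units arbitrary for the sketch)
angle, dist = pupil_deviation((3.0, 4.0), (0.0, 0.0))
```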
In still another possible embodiment, step 220 may specifically include the following steps:
determining a plurality of third images from the plurality of first images according to the characteristic parameters; and synthesizing the plurality of third images to obtain the target image.
The plurality of third images may be determined from the plurality of first images according to the user's sight range given by the characteristic parameters, and the target image is obtained by synthesizing the plurality of third images. For example, suppose the plurality of first images are numbered 1 to 100 from left to right; according to the user's sight range, the third images numbered 30 to 80 are selected from the first images. In this way the scene range covered by the user's line of sight is captured accurately, and the third images within that range are synthesized to obtain the target image.
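The frame-selection step in the example above can be sketched as slicing the first-image sequence by the sight-range bounds. The 1-based frame numbering and the `gaze_range` representation are assumptions for illustration.

```python
def select_third_images(first_images, gaze_range):
    """Keep only the first images whose frame number falls inside the
    user's sight range (e.g. frames 30 to 80 out of 100)."""
    lo, hi = gaze_range
    return first_images[lo - 1: hi]  # 1-based frame numbers, inclusive bounds

frames = list(range(1, 101))         # frames numbered 1..100, left to right
third = select_third_images(frames, gaze_range=(30, 80))
```

The selected `third` frames would then go through the same synthesis step used for the panorama case to produce the target image.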
Therefore, the characteristic parameters can be determined according to the deviation direction and deviation distance of the user's pupil relative to the preset reference point, and the third images screened from the first images according to those parameters can be synthesized to obtain a target image that matches what the user actually sees, realizing a 'what you see is what you get' display effect.
In summary, in the embodiment of the present application, a first image is captured by the rear camera and then processed according to the characteristic parameters to obtain the target image. Since the shooting direction of the rear camera is consistent with the line-of-sight direction of the user, the image in the user's pupil in the second image captured by the front camera can characterize the scene seen by the user's eyes. Therefore, the target image, obtained by processing the first image according to the characteristic parameters determined from the second image, differs little from the scene the user actually sees; that is, a 'what you see is what you get' effect can be realized.
It should be noted that, in the image processing method provided in the embodiment of the present application, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the loaded image processing method. In the embodiment of the present application, an image processing apparatus executes a loaded image processing method as an example, and the image processing method provided in the embodiment of the present application is described.
In addition, based on the image processing method, an embodiment of the present application further provides an image processing apparatus, which is specifically described in detail with reference to fig. 7.
Fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
As shown in fig. 7, the image processing apparatus 700 may include:
the shooting module 710 is used for acquiring a first image through a rear camera;
the processing module 720 is configured to process the first image according to the characteristic parameters to obtain a target image; the characteristic parameters are determined according to a second image captured by the front camera, and the second image includes an image in the user's pupil; the shooting direction of the rear camera is consistent with the line-of-sight direction of the user.
In a possible embodiment, the image processing apparatus 700 may further include:
and the extraction module is used for extracting the color parameters from the images in the pupils of the user.
A first determining module for determining the color parameter as the characteristic parameter.
In a possible embodiment, the image processing apparatus 700 may further include:
a first identification module to identify a target object in imaging in a pupil of a user.
A second determination module for determining a volume parameter of the target object in the imaging in the pupil of the user.
And the second determination module is further used for determining the volume parameter of the target object as the characteristic parameter under the condition that the volume parameter is larger than the preset volume parameter threshold.
And the second determination module is also used for determining the first focal length according to the volume parameter.
The first control module is used for controlling the rear camera to shoot the target object at the first focal length to obtain a plurality of first images.
In a possible embodiment, the processing module is specifically configured to: and synthesizing the plurality of first images according to the characteristic parameters to obtain the target image.
In a possible embodiment, the image processing apparatus 700 may further include:
and the second identification module is used for identifying a focusing area in the imaging of the pupil of the user, wherein the focusing area is an area focused by the pupil of the user.
And the third determining module is used for determining the characteristic parameters according to the position of the focus area in the second image.
In one possible embodiment, the processing module 720 includes:
and the amplifying module is used for amplifying an image area at least comprising the focusing area in the first image according to the characteristic parameters until a preset amplifying condition is met, so that a target image is obtained.
Wherein, the preset amplifying condition comprises: receiving target input of a user; or the enlarged image area meets the preset area range.
In a possible embodiment, there are a plurality of second images, and the image processing apparatus 700 may further include:
The fourth determining module is configured to determine, according to the imaging in the user's pupil in the plurality of second images, a deviation direction and a deviation distance of the user's pupil relative to a preset reference point.
The fourth determining module is further configured to determine the characteristic parameters according to the deviation direction and the deviation distance.
The fourth determining module is further configured to determine the second focal length according to the characteristic parameters.
The second control module is further configured to control the front camera to shoot at the second focal length to obtain the plurality of second images.
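A minimal sketch of deriving the deviation direction and deviation distance from pupil centers detected across the plurality of second images, assuming the preset reference point is the pupil position when the user looks straight at the front camera:

```python
import math

# Minimal sketch: pupil centers detected in a plurality of second images are
# averaged, then compared against a preset reference point (assumed here to
# be the pupil position when the user looks straight at the front camera).
def pupil_deviation(pupil_centers, reference=(0.0, 0.0)):
    """Return (deviation_direction_radians, deviation_distance)."""
    n = len(pupil_centers)
    cx = sum(p[0] for p in pupil_centers) / n  # average over frames to
    cy = sum(p[1] for p in pupil_centers) / n  # suppress detection noise
    dx, dy = cx - reference[0], cy - reference[1]
    return math.atan2(dy, dx), math.hypot(dx, dy)
```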
In one possible embodiment, the processing module 720 includes:
The fourth determining module is further configured to determine a plurality of third images from the plurality of first images according to the characteristic parameters.
The processing module 720 is specifically configured to synthesize the plurality of third images to obtain the target image.
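One way such selection and synthesis could work is sketched below; treating the characteristic parameter as a target brightness and averaging the selected frames pixel-wise are assumptions of this sketch:

```python
# Sketch of "determine third images from the first images, then synthesize":
# treating the characteristic parameter as a target brightness and averaging
# the selected frames pixel-wise are assumptions of this sketch.
def synthesize(first_images, target_brightness, keep=2):
    """first_images: list of equal-length grayscale pixel lists (0-255)."""
    def brightness(img):
        return sum(img) / len(img)
    # Third images: the `keep` frames whose brightness best matches the
    # characteristic parameter.
    third = sorted(first_images,
                   key=lambda im: abs(brightness(im) - target_brightness))[:keep]
    # Pixel-wise average of the selected frames yields the target image.
    return [sum(px) / keep for px in zip(*third)]
```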
In summary, the image processing apparatus provided in the embodiments of the present application captures a first image through the rear camera and processes the first image according to the characteristic parameters to obtain a target image. Since the shooting direction of the rear camera is consistent with the direction of the user's line of sight, the imaging in the user's pupil in the second image captured by the front camera can characterize the scene seen by the user's eyes. Therefore, the target image obtained by processing the first image according to the characteristic parameters determined from the second image is almost indistinguishable from the scene the user actually sees; that is, a "what you see is what you get" effect can be achieved.
The image processing apparatus in the embodiments of the present application may be a standalone apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA); the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine. The embodiments of the present application are not specifically limited in this respect.
The image processing apparatus in the embodiments of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited in this respect.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented by the image processing apparatus in the method embodiments of fig. 2 to fig. 6, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 8, an embodiment of the present application further provides an electronic device 800, which includes a processor 801, a memory 802, and a program or instructions stored in the memory 802 and executable on the processor 801. When executed by the processor 801, the program or instructions implement each process of the foregoing method embodiments and can achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 9 is a schematic hardware structure diagram of another electronic device according to an embodiment of the present application.
The electronic device 900 includes, but is not limited to: a radio frequency unit 901, a network module 902, an audio output unit 903, an input unit 904, a sensor 905, a display unit 906, a user input unit 907, an interface unit 908, a memory 909, and a processor 910. Among them, the input unit 904 may include a graphic processor 9041 and a microphone 9042; the display unit 906 may include a display panel 9061; the user input unit 907 may include a touch panel 9071 and other input devices 9072; memory 909 may include application programs and an operating system.
Those skilled in the art will appreciate that the electronic device 900 may further include a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 910 through a power management system, which manages charging, discharging, and power consumption. The structure shown in fig. 9 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or use a different arrangement of components. Details are not repeated here.
The processor 910 is configured to acquire a first image through the rear camera.
The processor 910 is further configured to process the first image according to the characteristic parameter to obtain a target image.
Optionally, the processor 910 is configured to extract a color parameter from the imaging in the user's pupil.
The processor 910 is further configured to determine the color parameter as the characteristic parameter.
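A minimal sketch of extracting a color parameter from the imaging in the user's pupil, assuming the pupil region has already been located as a bounding box and the color parameter is a plain per-channel mean:

```python
# Hypothetical sketch: the pupil region is assumed to be already located as a
# bounding box, and the color parameter is taken as a plain per-channel mean.
def extract_color_parameter(image, pupil_box):
    """image: dict mapping (x, y) -> (r, g, b); pupil_box: (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = pupil_box
    pixels = [image[(x, y)] for x in range(x0, x1) for y in range(y0, y1)]
    n = len(pixels)
    # Average each channel over the pupil region of the second image.
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))
```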
Optionally, the processor 910 is further configured to identify a target object in the imaging in the pupil of the user.
The processor 910 is further configured to determine a volume parameter of the target object in the imaging of the pupil of the user.
The processor 910 is further configured to determine a volume parameter of the target object as the feature parameter if the volume parameter is greater than a preset volume parameter threshold.
The processor 910 is further configured to determine a first focal length according to the volume parameter.
The processor 910 is further configured to control the rear camera to shoot the target object at the first focal length, so as to obtain at least one first image.
The processor 910 is further configured to perform synthesis processing on the multiple first images according to the characteristic parameters to obtain a target image.
The processor 910 is further configured to identify a focus area in the imaging in the user's pupil, the focus area being an area in which the user's pupil is focused.
The processor 910 is further configured to determine a characteristic parameter according to a position of the focus area in the second image.
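A sketch of mapping the focus area's position in the second image to a location in the first image. The normalized coordinates and the horizontal mirroring (to account for the pupil imaging being a reflection) are assumptions of this sketch:

```python
# Sketch of mapping the focus area's position in the second image to a point
# in the first image. Normalized [0, 1] coordinates and the horizontal mirror
# (the pupil imaging being a reflection) are assumptions of this sketch.
def focus_area_to_region(norm_x: float, norm_y: float,
                         rear_w: int, rear_h: int,
                         mirror_x: bool = True):
    """Return the corresponding (x, y) pixel position in the rear-camera image."""
    x = (1.0 - norm_x) if mirror_x else norm_x
    return (round(x * rear_w), round(norm_y * rear_h))
```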
Optionally, the processor 910 is further configured to enlarge an image area of the first image that at least includes the focus area according to the characteristic parameter until a preset enlargement condition is met, so as to obtain the target image.
The preset enlargement condition includes: a target input of the user is received; or the enlarged image area falls within a preset area range.
Optionally, there are a plurality of second images, and the processor 910 is further configured to determine, according to the imaging in the user's pupil in the plurality of second images, a deviation direction and a deviation distance of the user's pupil relative to a preset reference point.
The processor 910 is further configured to determine a characteristic parameter according to the deviation direction and the deviation distance.
The processor 910 is further configured to determine a second focal length according to the characteristic parameter.
The processor 910 is further configured to control the front camera to capture a plurality of second images at a second focal length.
Optionally, the processor 910 is further configured to determine a plurality of third images from the plurality of first images according to the characteristic parameter.
The processor 910 is further configured to perform synthesis processing on the multiple third images to obtain a target image.
In the embodiments of the present application, the first image is captured through the rear camera, and the first image is then processed according to the characteristic parameters to obtain the target image. Since the shooting direction of the rear camera is consistent with the direction of the user's line of sight, the imaging in the user's pupil in the second image captured by the front camera can characterize the scene seen by the user's eyes. Therefore, the target image obtained by processing the first image according to the characteristic parameters determined from the second image is almost indistinguishable from the scene the user actually sees; that is, a "what you see is what you get" effect can be achieved.
The embodiments of the present application further provide a readable storage medium, where a program or an instruction is stored, and when the program or the instruction is executed by a processor, the program or the instruction implements the processes of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the embodiment of the image processing method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a chip system, or a system-on-chip.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. An image processing method, comprising:
acquiring a first image through a rear camera, wherein the shooting direction of the rear camera is consistent with the sight line direction of a user;
processing the first image according to the characteristic parameters to obtain a target image;
wherein the characteristic parameters are determined according to a second image shot by a front camera, the second image comprises imaging in the user's pupil, and the imaging is capable of characterizing the scene that the user actually sees; the target image approximates the scene that the user actually sees.
2. The method of claim 1, wherein before the processing the first image according to the feature parameters to obtain the target image, the method further comprises:
extracting color parameters from the imaging in the user's pupil;
determining the color parameter as the feature parameter.
3. The method of claim 1, wherein before the processing the first image according to the feature parameters to obtain the target image, the method further comprises:
identifying a target object in imaging in the user's pupil;
determining a volume parameter of the target object in the imaging of the user's pupil;
determining the volume parameter of the target object as the characteristic parameter under the condition that the volume parameter is larger than a preset volume parameter threshold;
determining a first focal length according to the volume parameter;
and controlling the rear camera to shoot the target object by the first focal length to obtain at least one first image.
4. The method of claim 3, wherein the processing the first image to obtain the target image according to the feature parameter comprises:
and synthesizing the plurality of first images according to the characteristic parameters to obtain the target image.
5. The method according to claim 1, wherein before the processing the first image according to the feature parameters to obtain the target image, the method further comprises:
identifying a focus region in imaging in the user's pupil, the focus region being a region in which the user's pupil is focused;
determining the characteristic parameter according to the position of the focus area in the second image.
6. The method according to claim 5, wherein the processing the first image according to the feature parameter to obtain a target image comprises:
enlarging an image area of the first image that at least comprises the focus area according to the characteristic parameters until a preset enlargement condition is met, to obtain the target image;
wherein the preset enlargement condition comprises:
receiving a target input of the user; or,
the enlarged image area satisfies a preset area range.
7. The method according to claim 1, wherein the second image is a plurality of images, and before the processing the first image according to the characteristic parameter to obtain the target image, the method further comprises:
determining deviation directions and deviation distances of the user pupils relative to a preset reference point according to the images of the user pupils in the second images;
determining the characteristic parameters according to the deviation direction and the deviation distance;
determining a second focal length according to the characteristic parameters;
and controlling the front camera to shoot at the second focal length to obtain a plurality of second images.
8. The method of claim 7, wherein processing the first image to obtain a target image according to the feature parameter comprises:
determining a plurality of third images from the plurality of first images according to the characteristic parameters;
and synthesizing the plurality of third images to obtain the target image.
9. An image processing apparatus characterized by comprising:
the shooting module is used for shooting a first image through a rear camera, and the shooting direction of the rear camera is consistent with the sight line direction of a user;
the processing module is used for processing the first image according to the characteristic parameters to obtain a target image; the characteristic parameters are determined according to a second image shot by the front camera, the second image comprises an image in the pupil of the user, and the image can represent the characteristics of a scene which the user really sees; the target image approaches the scene that the user really sees.
10. An electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, the program or instructions, when executed by the processor, implementing the steps of the image processing method according to any one of claims 1 to 8.
CN202110120518.9A 2021-01-28 2021-01-28 Image processing method and device and electronic equipment Active CN112911148B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110120518.9A CN112911148B (en) 2021-01-28 2021-01-28 Image processing method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN112911148A CN112911148A (en) 2021-06-04
CN112911148B (en) 2022-10-14

Family

ID=76119806

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110120518.9A Active CN112911148B (en) 2021-01-28 2021-01-28 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112911148B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015184838A (en) * 2014-03-21 2015-10-22 大木 光晴 Image processing device, method, and program
CN107465873A (en) * 2017-08-30 2017-12-12 努比亚技术有限公司 A kind of processing method of image information, equipment and storage medium
CN107909057A (en) * 2017-11-30 2018-04-13 广东欧珀移动通信有限公司 Image processing method, device, electronic equipment and computer-readable recording medium
CN110036407A (en) * 2016-09-12 2019-07-19 Elc 管理有限责任公司 For the system and method based on mankind's sclera and pupil correcting digital image color
WO2020166256A1 (en) * 2019-02-14 2020-08-20 株式会社資生堂 Information processing terminal, program, information processing system, and color correction method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104811609A (en) * 2015-03-03 2015-07-29 小米科技有限责任公司 Photographing parameter adjustment method and device
CN109993115B (en) * 2019-03-29 2021-09-10 京东方科技集团股份有限公司 Image processing method and device and wearable device


Also Published As

Publication number Publication date
CN112911148A (en) 2021-06-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant