CN108965849B - Image processing method, device and system


Info

Publication number: CN108965849B
Authority: CN (China)
Prior art keywords: face, terminal, information, display information, image
Legal status: Active
Application number: CN201810841958.1A
Other languages: Chinese (zh)
Other versions: CN108965849A (en)
Inventor: 张艺
Current Assignee: Beijing Xiaomi Mobile Software Co Ltd
Original Assignee: Beijing Xiaomi Mobile Software Co Ltd
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201810841958.1A
Publication of CN108965849A
Application granted
Publication of CN108965849B


Landscapes

  • Image Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure relates to an image processing method, device and system, and belongs to the field of electronic technology application. The method comprises the following steps: acquiring a face image through a camera of a terminal; acquiring current ambient light information of the terminal, wherein the current ambient light information reflects brightness information of the environment in which the terminal is currently located; projecting structured light to the face and determining the intensity of the current structured light reflected by the face; determining target display information based on the current ambient light information and the intensity of the current structured light; and displaying the face image based on the target display information. This applies the structured light technology to the field of image processing and enriches the application scenarios of structured light. The present disclosure is used for processing images.

Description

Image processing method, device and system
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image processing method, apparatus, and system.
Background
Structured light technology is an optical imaging method. Its working principle is as follows: a structured light emitter projects structured light onto a specified object; after the light is reflected by the object, an image sensor capable of collecting the reflected light obtains image depth information of the object; combined with the image position information of the object collected by a visible light image sensor, three-dimensional image information of the specified object can then be obtained.
At present, the structured light technology is mainly applied to the field of face recognition, and its application scenarios are limited.
Disclosure of Invention
Embodiments of the present disclosure provide an image processing method, device, and system, which can solve the problem that the application scenarios of the structured light technology are limited. The technical solution is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided an image processing method, including:
acquiring a face image through a camera of a terminal;
acquiring current ambient light information of the terminal, wherein the current ambient light information reflects brightness information of the environment in which the terminal is currently located;
projecting structured light to the face and determining the intensity of the current structured light reflected by the face;
determining target display information based on the current ambient light information and the intensity of the current structured light;
and displaying the face image based on the target display information.
Optionally, the determining target display information based on the current ambient light information and the intensity of the current structured light includes:
acquiring a specified corresponding relation;
and querying the specified corresponding relation based on the current ambient light information and the intensity of the current structured light to obtain the target display information, wherein the specified corresponding relation records the correspondence among ambient light information, structured light intensity, and face display information.
Optionally, the obtaining of the designated corresponding relationship includes:
in a designated training period, after each start of the camera, projecting structured light to a face in the shooting area of the camera;
collecting ambient light information and determining the intensity of structured light reflected by a human face;
determining face display information;
and establishing the specified corresponding relation based on the ambient light information collected multiple times, the collected structured light intensity, and the corresponding face display information.
Optionally, the determining the face display information includes:
determining a plurality of alternative facial display information based on the collected ambient light information and the collected structured light intensity;
acquiring a face image to be adjusted through a camera of the terminal;
displaying the face image to be adjusted based on the plurality of alternative face display information respectively;
receiving a selection instruction aiming at a target face image, wherein the target face image is an image in a plurality of displayed face images to be adjusted;
and determining the alternative face display information corresponding to the target face image as the face display information.
Optionally, the acquiring, by the camera of the terminal, a face image to be adjusted includes:
carrying out face recognition on a user using the terminal through a structured light technology;
and when it is identified that the user using the terminal includes a target user, acquiring a face image of the target user through the camera of the terminal, and determining the acquired face image as the face image to be adjusted.
Optionally, the displaying the face image based on the target display information includes:
adjusting the skin colors of all human faces in the human face image based on the target display information;
or when the face image comprises at least two faces and the face of the target user, adjusting the skin color of the face of the target user in the face image based on the target display information.
Optionally, the obtaining of the designated corresponding relationship includes:
receiving the specified corresponding relation sent by a server, wherein the specified corresponding relation is established by the server based on the ambient light information collected multiple times by each sample terminal, the collected structured light intensity, and the corresponding face display information;
each sample terminal is configured to project structured light to a face in the shooting area of its camera after each start of the camera, collect ambient light information, determine the intensity of the structured light reflected by the face, and determine face display information, wherein the model of each sample terminal is the same as that of the terminal.
Optionally, the obtaining of the designated corresponding relationship includes:
determining a current image display mode;
and determining the designated corresponding relation corresponding to the current image display mode in a plurality of preset corresponding relations, wherein the plurality of preset corresponding relations correspond to a plurality of image display modes one to one.
Optionally, the face display information includes display information of at least two face regions, and the display information includes at least one of brightness information, contrast information, transparency information, color information, and a filtering algorithm.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus, including:
a first acquisition module configured to acquire a face image through a camera of a terminal;
a second acquisition module configured to acquire current ambient light information of the terminal, wherein the current ambient light information reflects brightness information of the environment in which the terminal is currently located;
a first determination module configured to project structured light toward a human face and determine an intensity of current structured light reflected by the human face;
a second determination module configured to determine target display information based on the current ambient light information and the intensity of the current structured light;
an adjustment module configured to display the face image based on the target display information.
Optionally, the second determination module includes:
an acquisition submodule configured to acquire a specified correspondence;
a query sub-module configured to query the specified correspondence based on the current ambient light information and the intensity of the current structured light to obtain the target display information, wherein the specified correspondence records the correspondence among ambient light information, structured light intensity, and face display information.
Optionally, the obtaining sub-module includes:
a structured light projection unit configured to project structured light to a face in the shooting area of the camera after each start of the camera in a specified training period;
a structured light collection unit configured to collect ambient light information and determine an intensity of structured light reflected by a human face;
a determination unit configured to determine face display information;
an establishing unit configured to establish the specified correspondence based on the ambient light information collected multiple times, the collected structured light intensity, and the corresponding face display information.
Optionally, the determining unit is configured to:
determining a plurality of alternative facial display information based on the collected ambient light information and the collected structured light intensity;
acquiring a face image to be adjusted through a camera of the terminal;
displaying the face image to be adjusted based on the plurality of alternative face display information respectively;
receiving a selection instruction aiming at a target face image, wherein the target face image is an image in a plurality of displayed face images to be adjusted;
and determining the alternative face display information corresponding to the target face image as the face display information.
Optionally, the determining unit is configured to:
carrying out face recognition on a user using the terminal through a structured light technology;
and when it is identified that the user using the terminal includes a target user, acquiring a face image of the target user through the camera of the terminal, and determining the acquired face image as the face image to be adjusted.
Optionally, the adjusting module is configured to:
adjusting the skin colors of all human faces in the human face image based on the target display information;
or when the face image comprises at least two faces and the face of the target user, adjusting the skin color of the face of the target user in the face image based on the target display information.
Optionally, the obtaining sub-module is configured to:
receiving the specified corresponding relation sent by a server, wherein the specified corresponding relation is established by the server based on the ambient light information collected multiple times by each sample terminal, the collected structured light intensity, and the corresponding face display information;
each sample terminal is configured to project structured light to a face in the shooting area of its camera after each start of the camera, collect ambient light information, determine the intensity of the structured light reflected by the face, and determine face display information, wherein the model of each sample terminal is the same as that of the terminal.
Optionally, the obtaining sub-module is configured to:
determining a current image display mode;
and determining the designated corresponding relation corresponding to the current image display mode in a plurality of preset corresponding relations, wherein the plurality of preset corresponding relations correspond to a plurality of image display modes one to one.
Optionally, the face display information includes display information of at least two face regions, and the display information includes at least one of brightness information, contrast information, transparency information, color information, and a filtering algorithm.
According to a third aspect of the embodiments of the present disclosure, there is provided an image processing apparatus, including:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of image processing of any of the first aspects.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored therein instructions that, when run on a processing component, cause the processing component to perform the method of image processing according to any one of the first aspects.
According to a fifth aspect of embodiments of the present disclosure, there is provided an image processing system, the system comprising:
a terminal and a server, wherein the terminal is the image processing apparatus according to any one of the second aspects;
alternatively, the terminal is the image processing apparatus according to the third aspect.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
the image processing method provided by the embodiment of the disclosure can acquire a face image and current ambient light information through a camera of a terminal, simultaneously project structured light to a face, determine the intensity of the current structured light reflected by the face, determine target display information based on the acquired current ambient light information and the intensity of the current structured light, and display the face image based on the target display information.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present disclosure, and other drawings can be obtained by those skilled in the art without inventive effort.
FIG. 1 is a flow diagram illustrating an image processing method according to an exemplary embodiment;
FIG. 2 is a schematic diagram illustrating the structure of an image processing terminal according to one exemplary embodiment;
FIG. 3 is a flow diagram illustrating another method of image processing according to an exemplary embodiment;
FIG. 4 is a schematic diagram illustrating a face region in accordance with an exemplary embodiment;
FIG. 5 is a flow diagram illustrating yet another method of image processing according to an exemplary embodiment;
FIG. 6 is a schematic diagram illustrating a face image according to an exemplary embodiment;
FIG. 7 is a diagram illustrating an implementation environment of an image processing method according to an exemplary embodiment;
FIG. 8 is a schematic diagram illustrating another face image according to an exemplary embodiment;
FIG. 9 is a schematic diagram illustrating yet another face image according to an exemplary embodiment;
FIG. 10 is a schematic diagram illustrating yet another face image according to an exemplary embodiment;
FIG. 11 is a schematic illustration of a face image according to another exemplary embodiment;
FIG. 12 is a block diagram illustrating an apparatus for processing an image according to an exemplary embodiment;
FIG. 13 is a block diagram illustrating another image processing apparatus according to an exemplary embodiment;
FIG. 14 is a block diagram illustrating yet another image processing apparatus according to an exemplary embodiment;
FIG. 15 is a block diagram illustrating yet another image processing apparatus according to an exemplary embodiment;
FIG. 16 is a block diagram illustrating an apparatus for image processing according to an exemplary embodiment;
fig. 17 is a block diagram illustrating another apparatus for image processing according to an exemplary embodiment.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the present disclosure clearer, the present disclosure is described in further detail below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present disclosure. All other embodiments derived by those of ordinary skill in the art from the embodiments disclosed herein without creative effort shall fall within the protection scope of the present disclosure.
The embodiment of the present disclosure provides an image processing method, which applies a structured light technology to the field of image processing, and expands an application scenario of the structured light technology, as shown in fig. 1, the method includes:
Step 101, acquiring a face image through a camera of a terminal.
Step 102, acquiring current ambient light information of the terminal, wherein the current ambient light information reflects brightness information of the environment in which the terminal is currently located.
Step 103, projecting structured light to the face, and determining the intensity of the current structured light reflected by the face.
Step 104, determining target display information based on the current ambient light information and the intensity of the current structured light.
Step 105, displaying the face image based on the target display information.
To sum up, the image processing method provided by the embodiment of the present disclosure may collect a face image and current ambient light information through a camera of a terminal, project structured light to a face, determine intensity of current structured light reflected by the face, determine target display information based on the obtained current ambient light information and the intensity of the current structured light, and display the face image based on the target display information.
Based on the structured light technology, the embodiment of the present disclosure provides an image processing method, which may be applied to a terminal with an image processing function, such as a mobile phone, as shown in fig. 2, where the terminal 30 includes a structured light emitter 301 for projecting structured light to a face to be detected; a structured light sensor 302, configured to collect structured light reflected by a face to be detected; a brightness sensor (also called a light sensor) 303, configured to collect current ambient light information of the terminal; a camera 304 for capturing images, the camera 304 may include a visible light image sensor 3041. As shown in fig. 3, the image processing method includes:
step 201, acquiring a face image through a camera of a terminal.
In the embodiment of the disclosure, after detecting a trigger operation by the user to start the camera of the terminal (for example, an operation of taking a photo or recording a video through the camera), the terminal starts the camera to acquire a face image. A face image is an image containing a human face; at least one human face may be present in the face image.
Because users usually take selfies with the front camera, the front camera is more likely than the rear camera to capture a face image. Therefore, after detecting the user's trigger operation to start the front camera, the terminal may start the front camera to acquire the face image.
Optionally, the terminal may also acquire a face image through a rear camera, which is not limited in the embodiment of the present disclosure.
Step 202, collecting current ambient light information of the terminal, wherein the current ambient light information reflects brightness information of the environment in which the terminal is currently located.
For example, after the terminal collects the face image, the current ambient light information of the terminal may be collected through the brightness sensor, and the current ambient light information is used to reflect the brightness information of the current environment of the terminal. Optionally, the terminal may also determine the brightness information of the current environment through the camera.
The luminance information may be expressed in lux, candelas per square meter (cd/m²), or nits.
For example, outdoors on a cloudy day, the brightness sensor of the terminal may collect the current ambient light information of the terminal as 200 lux.
Step 203, projecting structured light to the face, and determining the intensity of the current structured light reflected by the face.
The structured light emitter of the terminal may project structured light to the face region, and the structured light sensor may collect the structured light reflected by the face region and determine the intensity of the current structured light based on the collected light. The intensity of the current structured light may be expressed in candela (cd) or another luminous-intensity unit such as candlepower. In the embodiment of the present disclosure, the acquired structured light intensity may be characterized by the intensities at a plurality of acquisition points on the face (e.g., 200 to 2000 points), or by the intensities of a plurality of face regions on the face. It should be noted that the terminal may divide the region where the face is located into a plurality of face regions based on a preset rule; the finer the division granularity, the higher the accuracy of the subsequently determined display information. For example, as shown in fig. 4, the region where the face is located may be divided into: eye region 1, nose region 2, ear region 3, forehead region 4, cheek region 5, and chin region 6; or into two regions, a left face region and a right face region; or into an upper half region and a lower half region. The structured light intensity of each face region may be the structured light intensity of one acquisition point in that region, or a weighted average of the structured light intensities of the acquisition points in that region.
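As a minimal sketch of this per-region aggregation (the region names, point weights, and data layout below are illustrative assumptions rather than details from the patent), the intensity of each face region can be computed as a weighted average of the acquisition points assigned to it:

```python
# Minimal sketch: aggregate per-point structured light intensities into
# per-region intensities. Region membership and weights are assumed inputs.
from collections import defaultdict

def region_intensities(points, region_of, weight_of):
    """points: dict point_id -> intensity in cd (e.g. 200-2000 acquisition points).
    region_of: dict point_id -> region name, e.g. "left_face" / "right_face".
    weight_of: dict point_id -> weight used in the weighted average."""
    num = defaultdict(float)  # weighted intensity sum per region
    den = defaultdict(float)  # weight sum per region
    for pid, intensity in points.items():
        r = region_of[pid]
        num[r] += weight_of[pid] * intensity
        den[r] += weight_of[pid]
    return {r: num[r] / den[r] for r in num}

# Example: two regions characterized by three acquisition points.
points = {0: 48.0, 1: 52.0, 2: 60.0}
region_of = {0: "left_face", 1: "left_face", 2: "right_face"}
weight_of = {0: 1.0, 1: 1.0, 2: 1.0}
print(region_intensities(points, region_of, weight_of))
# {'left_face': 50.0, 'right_face': 60.0}
```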
Optionally, the structured light may be near-infrared light; accordingly, the structured light emitter is a near-infrared light emitter and the structured light sensor is a near-infrared light sensor. This prevents the user from perceiving the projected light when using the camera, improving the user experience.
And step 204, determining target display information based on the current ambient light information and the current structured light intensity.
The target display information may include display information for the region where the complete face is located, or display information for a plurality of face regions. The plurality of face regions may correspond one-to-one to the face regions used to characterize the structured light intensity acquired in step 203; for example, if the acquired structured light intensity is represented by the structured light intensities of 5 face regions, the determined target display information includes display information for those 5 face regions, enabling fine-grained determination of the display information.
The display information includes parameter information used when displaying the region where the face is located, for example: at least one of brightness information, contrast information, transparency information, color information, and a filtering algorithm (also called a filter). For example, the color information may include at least one of red, green, and blue, and the brightness information may be characterized by the luminance value or gray value of pixels in the image.
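For concreteness, this display information can be modeled as a small record type. The following is a minimal sketch, assuming every field is optional (only the fields present are applied) and that the field names, which the patent does not specify, are illustrative:

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple
import numpy as np

@dataclass
class DisplayInfo:
    """One face region's display information (all fields optional)."""
    brightness: Optional[float] = None             # e.g. target luminance in cd
    contrast: Optional[float] = None               # e.g. 1000.0 for 1000:1
    transparency: Optional[float] = None           # 0.0 (opaque) .. 1.0
    color: Optional[Tuple[int, int, int]] = None   # RGB tint
    filter_fn: Optional[Callable[[np.ndarray], np.ndarray]] = None  # filtering algorithm

# Example: the entry discussed below (contrast 1000:1, transparency 0.4).
info = DisplayInfo(contrast=1000.0, transparency=0.4)
```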
As shown in fig. 5, the terminal may determine the target display information by querying a preset designated corresponding relationship, and the process may be implemented by the following steps:
Step 2041, acquiring the specified correspondence.
Optionally, the specified correspondence is a set of correspondences among the ambient light information acquired by the terminal multiple times, the acquired structured light intensity, and the corresponding face display information; the terminal may adjust the acquired face image based on these correspondences.
In the embodiment of the present disclosure, there may be multiple ways for the terminal to obtain the designated corresponding relationship, and the present disclosure takes the following two ways as examples for description:
in the first acquisition mode, the designated corresponding relationship is acquired locally at the terminal.
And step A1, in the designated training period, after each start of the camera, projecting structured light to a face in the shooting area of the camera.
In the embodiment of the present disclosure, the terminal may establish the specified correspondence through user operations; for example, after the terminal leaves the factory, it prompts the user to execute the establishment process of the specified correspondence within a designated training period. The designated training period may cover the first n uses of the camera, where n is a positive integer greater than 1, or may be a period of a specified duration, such as 1 week, 2 weeks, or 1 month.
During the designated training period, each time the camera is started, the terminal may detect whether a face is present in the shooting area of the camera; when a face is present, the terminal projects structured light to the face in the shooting area through the structured light emitter for the subsequent establishment of the specified correspondence. This reduces the amount of invalid data acquisition.
Step A2, collecting the ambient light information and determining the intensity of the structured light reflected by the human face.
The terminal may collect ambient light information through its brightness sensor, collect structured light reflected by the face through the structured light sensor, and determine the intensity of the reflected structured light, and the process may refer to step 203 above.
Step A3, determining face display information.
Optionally, each time the camera is used, the terminal determines the face display information through the following steps:
Step A31, determining a plurality of candidate face display information based on the collected ambient light information and the collected structured light intensity.
The candidate face display information entries differ from one another. As described in step 204, the display information includes at least one of brightness information, contrast information, transparency information, color information, and a filtering algorithm. Suppose the display information includes a filtering algorithm, and the candidate face display information entries are a first filtering algorithm and a second filtering algorithm, respectively; for example, the first filtering algorithm and the second filtering algorithm may be a bilateral filtering algorithm and a guided filtering algorithm, respectively.
Step A32, acquiring a face image to be adjusted through the camera of the terminal.
In the embodiment of the disclosure, the face image to be adjusted may be acquired in various manners. In one optional manner, the terminal directly determines the image containing a face acquired by the camera as the face image to be adjusted; in another optional manner, the terminal performs face recognition on the image containing a face acquired by the camera and then determines the face image to be adjusted based on the recognition result. In the latter manner, step A32 may include:
step A321, carrying out face recognition on a user using the terminal through a structured light technology.
The user using the terminal refers to the user currently using the terminal, that is, the user detected by the terminal after the camera is started.
In the embodiment of the present disclosure, the terminal may perform face recognition on the user using the terminal based on the structured light projected to and collected from the face, so as to recognize whether the user currently using the terminal includes a target user. Face recognition based on the structured light technology has high precision. Optionally, the terminal may also perform face recognition on the user using the terminal through other technologies (for example, a preset face recognition algorithm), which is not limited in this disclosure. The target user is a user pre-designated in the terminal, for example, the owner of the terminal or a user related to the owner, such as a spouse or a relative. The pre-designated user is typically set by the owner of the terminal.
When the user currently using the terminal includes the target user, step A322 is executed; when the user currently using the terminal does not include the target user, acquisition of the face image to be adjusted is stopped. At this point, prompt information may be generated to inform the user that face recognition has failed.
Step A322, when it is identified that the user using the terminal includes a target user, acquiring a face image of the target user through the camera of the terminal, and determining the acquired face image as the face image to be adjusted.
When the terminal identifies that the user currently using the terminal includes the target user, the camera of the terminal acquires a face image including the target user's face; such an image generally contains one face. Optionally, when the acquired face image contains more than one face, the terminal may directly determine the whole image as the face image to be adjusted, or extract the face image to be adjusted from the region where the target user's face is located in the acquired image.
Optionally, before step A321, the terminal may detect the number of people using the terminal after the camera is turned on; step A321 is performed when that number is 1 and is not performed otherwise. This ensures that the terminal recognizes only 1 face when executing step A321, reducing the computational cost of recognition and improving recognition efficiency.
In an example, assume the terminal is a mobile phone, the owner is Xiao Ming, and there are 2 target users, Xiao Ming and Xiao Bai. When Xiao Ming takes a selfie alone, step A321 is used to perform face recognition on the user of the phone through the structured light technology, and recognition determines that the user includes a target user (Xiao Ming); the phone's camera then acquires Xiao Ming's face image, which is determined as the face image to be adjusted. When Xiao Ming and Xiao Hong take a selfie together, step A321 is again used, and recognition determines that the users include a target user (Xiao Ming); the camera acquires a face image containing the faces of Xiao Ming and Xiao Hong (2 faces), and this image is determined as the face image to be adjusted; alternatively, the phone extracts the region where Xiao Ming's face is located from the acquired image (containing 2 faces) to obtain a face image to be adjusted containing 1 face.
Step A33, displaying the face image to be adjusted based on each of the plurality of candidate face display information.
The terminal may display the face image to be adjusted based on each of the candidate face display information determined in step A31. As shown in fig. 6, suppose the candidate face display information entries are a first filtering algorithm and a second filtering algorithm; the terminal then displays the face image to be adjusted in the manner shown in fig. 6, where picture 11 is the face image to be adjusted displayed using the first filtering algorithm, and picture 12 is the face image to be adjusted displayed using the second filtering algorithm. Optionally, the terminal may also display the original, unprocessed face image to be adjusted, providing a clear contrast with the processed images and making selection easier.
Step A34, receiving a selection instruction for a target face image, wherein the target face image is an image in a plurality of displayed face images to be adjusted.
For example, the user may select the target face image by clicking or double clicking, and accordingly, the terminal receives a selection instruction for the target face image. For example, referring to fig. 6, the selection instruction in fig. 6 indicates that the image corresponding to the picture 12 is the target face image.
Step A35, determining the candidate face display information corresponding to the target face image as the face display information.
Continuing the example in step A34: since the selection instruction indicates that the image corresponding to picture 12 is the target face image, and picture 12 is the face image to be adjusted displayed using the second filtering algorithm, the face display information is the second filtering algorithm.
It should be noted that the face display information may include display information for the region where the complete face is located, or display information for a plurality of face regions; the specific division can refer to step 204 above. Step A3 is described only by taking the former case as an example; when the face display information includes display information for a plurality of face regions, the determination of the display information for each face region may likewise refer to step A3, which is not described in detail in this disclosure. For example, in step A31, suppose the face display information includes display information for 2 face regions (e.g., a left face region and a right face region); each candidate face display information then includes display information for both regions, for example: candidate face display information 1: left face region, first filtering algorithm; right face region, second filtering algorithm. Candidate face display information 2: left face region, third filtering algorithm; right face region, fourth filtering algorithm. A sketch of this selection flow is given below.
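The following is a minimal sketch of steps A31 to A35, assuming OpenCV is available; the candidate filters and the console prompt standing in for the tap-selection instruction are illustrative assumptions (the guided filter named in step A31 lives in opencv-contrib as cv2.ximgproc.guidedFilter, so a Gaussian blur stands in for it here):

```python
import cv2

def choose_face_display_info(img_bgr):
    """Render the image to be adjusted under each candidate and let the
    user pick one; returns the chosen candidate's name and function."""
    candidates = {
        # First filtering algorithm: bilateral filter (edge-preserving smoothing).
        "bilateral": lambda im: cv2.bilateralFilter(im, d=9, sigmaColor=75, sigmaSpace=75),
        # Second filtering algorithm: a Gaussian blur stands in here for the
        # guided filter, which requires opencv-contrib.
        "gaussian": lambda im: cv2.GaussianBlur(im, (9, 9), 0),
    }
    for name, fn in candidates.items():
        cv2.imshow(name, fn(img_bgr))   # picture 11, picture 12, ...
    cv2.imshow("original", img_bgr)     # unprocessed image, shown for contrast
    cv2.waitKey(0)
    chosen = input("name of the target face image: ")  # selection instruction
    return chosen, candidates[chosen]
```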
Step A4, establishing the specified correspondence based on the ambient light information collected multiple times, the collected structured light intensity, and the corresponding face display information.
After the designated training period ends, the terminal establishes the specified correspondence among the ambient light information, the structured light intensity, and the face display information stored from each acquisition. Suppose the structured light intensity is characterized by the intensities of 2 face regions (a left face region and a right face region), and the face display information includes contrast information and transparency information; the specified correspondence may then be as shown in Table 1. For example, when the ambient light information is a brightness of 200 lux and the structured light intensity is 50 cd for the left face region and 60 cd for the right face region, the corresponding face display information is: contrast 1000:1, transparency 0.4.
TABLE 1

Ambient light information (brightness) | Structured light intensity (left face / right face) | Face display information (contrast / transparency)
200 lux | 50 cd / 60 cd | 1000:1 / 0.4
100 lux | 100 cd / 45 cd | 500:1 / 0.7
It should be noted that the correspondence in table 1 is merely an exemplary description, and does not represent all the correspondences that can be acquired by the terminal.
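A minimal sketch of how such entries could be recorded during the training period, assuming the key layout of Table 1 (ambient brightness in lux plus two region intensities in cd); the bucketing by rounding is an illustrative choice, not something the patent specifies:

```python
correspondence = {}  # (ambient lux, left cd, right cd) -> face display info

def record_sample(ambient_lux, left_cd, right_cd, display_info):
    """Store one training acquisition; keys are bucketed so that nearby
    measurements collected in the same scene share an entry."""
    key = (round(ambient_lux, -1), round(left_cd), round(right_cd))
    correspondence[key] = display_info

# The two example rows of Table 1:
record_sample(200, 50, 60, {"contrast": "1000:1", "transparency": 0.4})
record_sample(100, 100, 45, {"contrast": "500:1", "transparency": 0.7})
```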
In the embodiment of the disclosure, the terminal establishes the specified correspondence through user operations, so the specified correspondence better matches the user's preferences and is more personalized, which can effectively improve the user experience.
In the second acquisition mode, the designated corresponding relationship is acquired from the server.
Referring to fig. 7, fig. 7 is a schematic diagram of an implementation environment of an image processing method according to an embodiment of the present disclosure, where the implementation environment includes: a server 110 and a plurality of terminals 120, the plurality of terminals 120 including a plurality of sample terminals.
The server 110 may be a single server, a server cluster composed of several servers, or a cloud computing service center. The terminal 120 may be a smartphone, a computer, a multimedia player, an e-reader, a wearable device, or the like.
The connection between the server 110 and the terminal 120 may be established through a wired network or a wireless network.
It is assumed that the image processing method provided by the embodiment of the present disclosure is executed by one of the terminals. The sample terminals are of the same model as this terminal and may or may not include it (that is, the terminal may also perform the actions of a sample terminal), which is not limited in the embodiment of the present disclosure.
With the second acquisition manner, the terminal needs to obtain the specified correspondence from the server when executing the image processing method: the terminal receives the specified correspondence sent by the server, where the correspondence is established by the server based on the ambient light information collected multiple times by the sample terminals, the collected structured light intensity, and the corresponding face display information. Because the sample terminals are of the same model as the terminal, the finally determined specified correspondence better matches the terminal's requirements.
The process of establishing the designated corresponding relation by the server comprises the following steps:
and step B1, after the camera of each sample terminal is turned on, each sample terminal projects structured light to the face of the shooting area of the camera of the sample terminal.
In the embodiment of the present disclosure, the designated corresponding relationship may be established by designated personnel cooperating with the server, and the terminal operated by the designated personnel is the sample terminal.
The designated personnel may operate the sample terminals according to a preset rule, so that each sample terminal projects structured light to the face in the shooting area of its camera after the camera is started.
Optionally, each time the camera is turned on, each sample terminal may detect whether a face is present in the shooting area of the camera; when a face is present, structured light is projected to the face in the shooting area through the structured light emitter for the subsequent establishment of the specified correspondence. This reduces the amount of invalid data acquisition.
And step B2, collecting the ambient light information by each sample terminal and determining the intensity of the structured light reflected by the human face.
Step B2 can refer to step a2, which is not described in detail herein.
Step B3, each sample terminal determines face display information.
Step B3 can refer to step A3, which is not described in detail herein.
Step B4, each sample terminal transmits the determined face display information to the server.
Each sample terminal may transmit the determined face display information to the server through a wired network or a wireless network established with the server.
And step B5, the server establishes the specified correspondence based on the ambient light information collected multiple times by each sample terminal, the collected structured light intensity, and the corresponding face display information.
By adopting the second acquisition mode, the terminal can acquire the designated corresponding relation from the server without establishing the designated corresponding relation, so that the operation cost of the terminal is reduced.
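A minimal sketch of this server-side aggregation (steps B1 to B5), assuming reports arrive as simple tuples and that conflicting choices from different sample terminals of the same model are merged by majority vote; both assumptions are illustrative, not from the patent:

```python
from collections import Counter, defaultdict

reports = defaultdict(list)  # (model, lighting key) -> reported display infos

def receive_report(model, ambient_lux, left_cd, right_cd, display_info):
    """Steps B4/B5: collect one sample terminal's (conditions, choice) pair."""
    key = (model, round(ambient_lux, -1), round(left_cd), round(right_cd))
    reports[key].append(display_info)

def build_correspondence(model):
    """Keep, per lighting condition, the display info chosen most often
    by sample terminals of the given model."""
    table = {}
    for (m, *cond), infos in reports.items():
        if m == model:
            table[tuple(cond)] = Counter(infos).most_common(1)[0][0]
    return table

receive_report("phone-X", 200, 50, 60, "second filtering algorithm")
receive_report("phone-X", 200, 50, 60, "second filtering algorithm")
receive_report("phone-X", 200, 52, 61, "first filtering algorithm")
print(build_correspondence("phone-X"))
```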
It should be noted that the terminal may obtain a plurality of preset correspondences, of which the specified correspondence is only one; the acquisition process for each correspondence may follow the first or the second acquisition manner. The terminal may select the specified correspondence from the plurality of preset correspondences based on its own requirements, in which case step 2041 may be replaced with:
and step C1, determining the current image display mode.
In the embodiment of the present disclosure, multiple image display modes may be set based on a preset division rule. For example, image display modes may be set according to the skin tones of different ethnic groups: an Asian display mode, a European display mode, an African display mode, and the like; or according to the skin color display effect: a sweet mode, a ruddy mode, a fair mode, a vintage mode, and the like. The plurality of image display modes correspond one-to-one to the plurality of preset correspondences.
The current image display mode may be a default display mode of the terminal, or may be a display mode set by the user in advance or in real time.
Step C2, determining, among the plurality of preset correspondences, the specified correspondence corresponding to the current image display mode.
The one-to-one correspondence between the plurality of image display modes and the plurality of preset correspondences may be stored as a table; the terminal may query the stored table based on the current image display mode to obtain the correspondence matching that mode and determine it as the specified correspondence.
It should be noted that the one-to-one correspondence relationship may also be stored in a form of a queue or an index map, which is not limited in the embodiment of the present disclosure.
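A minimal sketch of steps C1 and C2, assuming the one-to-one mapping is stored as a dictionary (standing in for the table, queue, or index map mentioned above), with mode names taken from the examples above and the table contents elided:

```python
# One preset correspondence per image display mode (entries elided; each
# value would hold rows like those in Table 1).
preset_correspondences = {
    "sweet mode": {},
    "ruddy mode": {},
    "fair mode": {},
    "vintage mode": {},
}

def specified_correspondence(current_mode, default_mode="fair mode"):
    """Step C2: return the preset correspondence matching the current image
    display mode, falling back to the terminal's default display mode."""
    return preset_correspondences.get(current_mode,
                                      preset_correspondences[default_mode])
```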
Step 2042, querying the specified correspondence based on the current ambient light information and the intensity of the current structured light to obtain the target display information, wherein the specified correspondence records the correspondence among ambient light information, structured light intensity, and face display information.
For example, taking the specified correspondence shown in Table 1: suppose the ambient light brightness around the user using the terminal is 100 lux, and the terminal determines the structured light intensity of the user's left face region to be 100 cd and that of the right face region to be 45 cd. Looking up Table 1 then yields the target display information: contrast 500:1, transparency 0.7.
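Because live measurements rarely match a stored key exactly, a practical lookup needs a tolerance. The sketch below returns the entry whose recorded conditions are nearest to the current measurement; the squared-distance metric is an illustrative assumption:

```python
def query_display_info(correspondence, ambient_lux, left_cd, right_cd):
    """Return the face display information whose recorded conditions are
    closest to the current ambient light and structured light intensities."""
    def dist(key):
        a, l, r = key
        return (a - ambient_lux) ** 2 + (l - left_cd) ** 2 + (r - right_cd) ** 2
    return correspondence[min(correspondence, key=dist)]

table = {
    (200, 50, 60): {"contrast": "1000:1", "transparency": 0.4},
    (100, 100, 45): {"contrast": "500:1", "transparency": 0.7},
}
# The example from the text: 100 lux ambient, 100 cd left, 45 cd right.
print(query_display_info(table, 100, 100, 45))
# {'contrast': '500:1', 'transparency': 0.7}
```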
Step 205, displaying the face image based on the target display information.
In the embodiment of the present disclosure, different manners of obtaining the specified correspondence may call for different manners of adjusting the skin color in the face image. The embodiment of the present disclosure takes the following two adjustment modes as examples:
In the first adjustment mode, when the specified correspondence is obtained locally at the terminal, the face image is displayed based on the target display information.
In a first implementation manner, the terminal may adjust the skin color of all faces in the face image based on the target display information.
In a second implementation manner, when the face image includes at least two faces and includes the face of the target user, the skin color of the face of the target user in the face image is adjusted based on the target display information.
In a third implementation manner, when the face image includes at least two faces, the skin color of the designated face in the face image is adjusted based on the target display information. Therefore, the face skin color can be adjusted in a targeted manner.
In the first adjustment mode, referring to fig. 8, the face image displayed on the terminal includes two faces A and B, where face A is the face of the target user, and the ambient light information and structured light intensity measured while the user uses the terminal are consistent with those in step 2042. Suppose the target display information at this time is the second filtering algorithm; the terminal then needs to adjust the faces in the face image based on the second filtering algorithm. With the first implementation, the skin colors of all faces in the face image are adjusted based on the second filtering algorithm, and the adjusted face image is shown in fig. 9. With the second implementation, the skin color of the target user's face A is adjusted based on the second filtering algorithm, and the adjusted face image is shown in fig. 10. With the third implementation, the skin color of the face designated by the user is adjusted based on the second filtering algorithm; supposing the designated face is face B, the adjusted face image is shown in fig. 11.
In the second adjustment mode, when the designated corresponding relation is acquired from the server, the face image is displayed based on the target display information.
In a first implementation, the skin color of all faces in the face image is adjusted based on the target display information.
In a second implementation manner, when the face image includes at least two faces, the skin color of the designated face in the face image is adjusted based on the target display information.
In the second adjustment mode, again taking the face image shown on the terminal in fig. 8 as an example, the face image includes two faces A and B, where face A is the face of the target user, and the ambient light information and structured light intensity measured while the user uses the terminal are consistent with those in step 2042. Suppose the target display information at this time is the second filtering algorithm; the terminal then needs to adjust the faces in the face image based on the second filtering algorithm. With the first implementation, the skin colors of all faces in the face image are adjusted based on the second filtering algorithm, and the adjusted face image is shown in fig. 9. With the second implementation, the skin color of the face designated by the user is adjusted based on the second filtering algorithm; supposing the designated face is face B, the adjusted face image is shown in fig. 11.
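As a minimal sketch of adjusting the skin color of only one face (as in fig. 10 or fig. 11), assuming OpenCV is available, that a bilateral filter stands in for the second filtering algorithm, and that the face's bounding box is already known from face recognition:

```python
import cv2
import numpy as np

def adjust_target_face(img_bgr, face_box):
    """Adjust skin color only inside one face's bounding box.
    face_box: (x, y, w, h) for the face to be adjusted (e.g. face A)."""
    x, y, w, h = face_box
    out = img_bgr.copy()
    # OpenCV needs a contiguous array, so copy the face region out first.
    roi = np.ascontiguousarray(out[y:y + h, x:x + w])
    # Second filtering algorithm (bilateral filter as a stand-in),
    # applied only to the selected face region.
    out[y:y + h, x:x + w] = cv2.bilateralFilter(roi, 9, 75, 75)
    return out

img = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder face image
adjusted = adjust_target_face(img, (200, 120, 160, 200))
```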
To make the objects, technical solutions, and advantages of the present disclosure clearer, a practical example is used here to describe an embodiment of the present disclosure. Suppose Xiao Ming has bought a mobile phone that processes acquired images using the method of the present disclosure. When the phone detects that Xiao Ming has triggered starting of the camera, for example when Xiao Ming is taking a selfie, the front camera collects Xiao Ming's face image and the ambient light information around the phone, and at the same time the phone projects structured light to Xiao Ming's face and determines the intensity of the structured light reflected by the face. Suppose the ambient light brightness around the phone is 200 lux, and the phone determines the structured light intensity of the left face region to be 50 cd and that of the right face region to be 60 cd. The phone queries the specified correspondence with the ambient light and reflected structured light intensity data and determines that the brightness of the face complexion in the selfie needs to be adjusted to 30 cd and the transparency to 0.4, so that the image obtained after adjusting the face complexion has higher quality. The specified correspondence may have been obtained by the phone itself, by collecting the one-to-one correspondence among ambient light information, structured light intensity, and the corresponding face display information in different usage scenarios during the first several uses of the camera or within a period of time after purchase; or it may have been obtained from a server, that is, established through training by the server.
It should be noted that the order of the steps of the image processing method provided in the embodiments of the present disclosure may be appropriately adjusted, and steps may be added or removed as required. Any variation readily conceivable by those skilled in the art within the technical scope disclosed herein shall fall within the protection scope of the present disclosure and is therefore not described in detail.
To sum up, the image processing method provided by the embodiment of the present disclosure may determine face display information based on the ambient light information and structured light intensity collected by the terminal, establish the specified correspondence among the three, determine target display information based on the preset specified correspondence corresponding to the current image display mode, and adjust the skin color of faces in the face image in a manner that depends on how the specified correspondence was obtained. This applies the structured light technology to the field of face image processing and enriches the application scenarios of structured light.
Moreover, the intensity of the structured light reflected by the face reflects the depth information of the face, and therefore the distances between different face regions and the terminal. The specified correspondence thus actually records the correspondence among the brightness of the environment, the distances between different face regions and the terminal, and the face display information. Adjusting the skin color in the face image based on this correspondence yields a face image better suited to the brightness of the current environment and the distance between the face and the terminal.
The present disclosure provides an image processing apparatus 40, as shown in fig. 12, the apparatus 40 including:
a first acquisition module 401 configured to acquire a face image through a camera of a terminal;
a second collecting module 402, configured to collect current ambient light information of the terminal, where the current ambient light information is used to reflect brightness information of an environment where the terminal is currently located;
a first determination module 403 configured to project structured light toward the face and determine an intensity of the current structured light reflected by the face;
a second determination module 404 configured to determine target display information based on the current ambient light information and the intensity of the current structured light;
an adjustment module 405 configured to display the face image based on the target display information.
To sum up, in the image processing apparatus provided by the embodiment of the present disclosure, the first acquisition module may acquire a face image through the camera of the terminal, the second acquisition module collects current ambient light information, the first determination module projects structured light to the face and determines the intensity of the current structured light reflected by the face, the second determination module determines target display information based on the obtained current ambient light information and the intensity of the current structured light, and the adjustment module displays the face image based on the target display information. By combining the structured light technology with portrait processing technology, the apparatus applies structured light to the field of face image processing and enriches its application scenarios.
Optionally, as shown in fig. 13, the second determination module 404 includes:
an obtaining sub-module 4041 configured to obtain the specified correspondence;
a query sub-module 4042 configured to query the specified correspondence based on the current ambient light information and the intensity of the current structured light to obtain the target display information, wherein the specified correspondence records the correspondence among ambient light information, structured light intensity, and face display information.
Optionally, as shown in fig. 14, the obtaining sub-module 4041 includes:
a structured light projection unit 4041A configured to project structured light to a face of a shooting area of the camera each time the camera is turned on in a specified training period;
a structured light acquisition unit 4041B configured to acquire ambient light information and determine an intensity of structured light reflected by a human face;
a determination unit 4041C configured to determine face display information;
an establishing unit 4041D configured to establish the specified correspondence based on the ambient light information collected multiple times, the collected structured light intensity, and the corresponding face display information.
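A minimal sketch of how such a correspondence might be accumulated during the specified training period follows. The bucketing thresholds and the newest-sample-wins policy are illustrative assumptions, not the disclosed implementation:

```python
# Hypothetical training-period sketch: each time the camera starts, fold
# the (ambient light, structured light intensity, chosen display info)
# sample into the specified correspondence being built.
correspondence: dict = {}

def bucket(lux: float, intensity: float) -> tuple:
    # Same illustrative thresholds as the earlier query sketch.
    return ("dim" if lux < 100.0 else "bright",
            "near" if intensity > 0.5 else "far")

def record_sample(lux: float, intensity: float, display_info: dict) -> None:
    """Store the display info chosen under these capture conditions."""
    correspondence[bucket(lux, intensity)] = display_info  # newest sample wins

# One simulated camera start-up during the training period:
record_sample(lux=40.0, intensity=0.7,
              display_info={"cheek": {"brightness": 20.0}})
```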
Optionally, the determination unit 4041C is configured to:
determining a plurality of alternative face display information based on the collected ambient light information and the collected structured light intensity;
acquiring a face image to be adjusted through a camera of the terminal;
displaying the face image to be adjusted based on the plurality of alternative face display information respectively;
receiving a selection instruction aiming at a target face image, wherein the target face image is an image in a plurality of displayed face images to be adjusted;
and determining the alternative face display information corresponding to the target face image as the face display information.
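The selection flow performed by this unit could be sketched as follows; the callback interfaces and the candidate values are invented for illustration only:

```python
# Hypothetical sketch of the determination unit: render the face image
# to be adjusted under each candidate display setting, show the previews,
# and keep the candidate the user selects.
def determine_display_info(candidates: list, render, ask_user_choice) -> dict:
    """Return the candidate chosen via the selection instruction."""
    previews = [render(info) for info in candidates]
    chosen = ask_user_choice(previews)  # index from the selection instruction
    return candidates[chosen]

# Toy usage with stand-in callbacks:
candidates = [{"brightness": 10.0}, {"brightness": 25.0}, {"brightness": 40.0}]
picked = determine_display_info(
    candidates,
    render=lambda info: "preview at brightness %+.0f" % info["brightness"],
    ask_user_choice=lambda previews: 1,  # pretend the user picks the second
)
# picked == {"brightness": 25.0}
```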
Optionally, the determination unit 4041C is configured to:
carrying out face recognition on a user using the terminal through a structured light technology;
when the identified users using the terminal include a target user, acquiring a face image of the target user through the camera of the terminal, and determining the acquired face image as the face image to be adjusted.
Optionally, the adjustment module 405 is configured to:
adjusting the skin colors of all human faces in the human face image based on the target display information;
or, when the face image comprises at least two faces including the face of the target user, adjusting the skin color of the face of the target user in the face image based on the target display information.
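The branching just described, adjusting every face or only the target user's face when several faces are present, might look like the sketch below; the Face container and the adjustment helper are hypothetical:

```python
# Hypothetical sketch of the adjustment module's two branches: adjust all
# faces, or only the target user's face when other faces are also present.
from dataclasses import dataclass

@dataclass
class Face:
    is_target_user: bool
    skin_tone: float = 0.0  # placeholder for per-face skin color state

def adjust_skin(face: Face, display_info: dict) -> None:
    # Stand-in for the real per-region skin color adjustment.
    face.skin_tone += display_info.get("brightness", 0.0)

def display_faces(faces: list, display_info: dict) -> None:
    only_target = len(faces) >= 2 and any(f.is_target_user for f in faces)
    for face in faces:
        if not only_target or face.is_target_user:
            adjust_skin(face, display_info)

faces = [Face(is_target_user=True), Face(is_target_user=False)]
display_faces(faces, {"brightness": 15.0})  # only the target face changes
```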
Optionally, the obtaining sub-module 4041 is configured to:
receiving the specified correspondence sent by the server, where the specified correspondence is established by the server based on the ambient light information, the structured light intensity, and the corresponding face display information collected multiple times by each sample terminal;
each sample terminal is configured to, each time its camera is started, project structured light to the face in the shooting area of the camera, collect ambient light information, determine the intensity of the structured light reflected by the face, and determine face display information, each sample terminal being of the same model as the terminal.
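The disclosure does not state how the server combines the samples reported by the sample terminals, so the sketch below assumes a simple per-condition average, purely for illustration:

```python
# Hypothetical server-side aggregation: average the brightness chosen by
# all sample terminals under the same (ambient, intensity) bucket.
from collections import defaultdict

samples = defaultdict(list)  # bucket -> brightness values from terminals

def ingest(bucket: tuple, brightness: float) -> None:
    samples[bucket].append(brightness)

def build_correspondence() -> dict:
    return {bucket: {"brightness": sum(vals) / len(vals)}
            for bucket, vals in samples.items()}

ingest(("dim", "near"), 20.0)  # reported by sample terminal A
ingest(("dim", "near"), 30.0)  # reported by sample terminal B
specified = build_correspondence()  # {("dim", "near"): {"brightness": 25.0}}
```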
Optionally, the obtaining sub-module 4041 is configured to:
determining a current image display mode;
and determining, from a plurality of preset correspondences, the specified correspondence that corresponds to the current image display mode, where the plurality of preset correspondences correspond to a plurality of image display modes one to one.
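Under this scheme, selecting the specified correspondence reduces to a lookup keyed by the current image display mode, as in the sketch below; the mode names and table contents are assumptions:

```python
# Hypothetical one-to-one mapping from image display modes to preset
# correspondences (table contents abbreviated for illustration).
PRESET_CORRESPONDENCES = {
    "photo": {("dim", "near"): {"brightness": 20.0}},
    "video": {("dim", "near"): {"brightness": 12.0}},
}

def correspondence_for(mode: str) -> dict:
    """Pick the preset specified correspondence matching the current mode."""
    return PRESET_CORRESPONDENCES[mode]

specified = correspondence_for("photo")
```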
Optionally, the face display information includes display information of at least two face regions, and the display information includes at least one of brightness information, contrast information, transparency information, color information, and a filtering algorithm.
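One way to model face display information of this shape is sketched below; the field set mirrors the items listed above, while the types and defaults are assumptions:

```python
# Hypothetical container for face display information: one entry per face
# region, each carrying the optional display fields listed above.
from dataclasses import dataclass, field
from typing import Callable, Optional, Tuple

@dataclass
class RegionDisplayInfo:
    brightness: Optional[float] = None
    contrast: Optional[float] = None
    transparency: Optional[float] = None
    color: Optional[Tuple[int, int, int]] = None  # e.g. an RGB shift
    filter_algorithm: Optional[Callable] = None

@dataclass
class FaceDisplayInfo:
    regions: dict = field(default_factory=dict)  # region name -> info

info = FaceDisplayInfo(regions={
    "cheek": RegionDisplayInfo(brightness=20.0, contrast=5.0),
    "forehead": RegionDisplayInfo(brightness=12.0),
})
```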
In the image processing apparatus provided by the embodiments of the present disclosure, the first acquisition module may acquire a face image through the camera of the terminal, and the second acquisition module collects the current ambient light information. The first determination module projects structured light toward the face and determines the intensity of the current structured light reflected by the face; the obtaining sub-module obtains the specified correspondence, the query sub-module queries that correspondence based on the current ambient light information and the intensity of the current structured light to obtain the target display information, and the adjustment module displays the face image based on the target display information. By combining the structured light technology with portrait processing, the apparatus applies structured light to the field of face image processing and enriches the application scenes of the structured light technology.
An embodiment of the present disclosure provides an image processing apparatus 50. As shown in fig. 15, the apparatus 50 includes:
a processor 501;
a memory 502 for storing executable instructions of the processor;
wherein the processor is configured to execute any of the image processing methods provided by the embodiments of the present disclosure.
An embodiment of the present disclosure provides an image processing system, including:
a terminal and a server, wherein the terminal is the image processing apparatus shown in any one of fig. 12 to 14;
alternatively, the terminal is the image processing apparatus shown in fig. 15. Optionally, the system may be deployed in the implementation environment shown in fig. 7.
Fig. 16 is a block diagram illustrating an apparatus 60 for image processing according to an exemplary embodiment. For example, the apparatus 60 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 16, the apparatus 60 may include one or more of the following components: processing component 602, memory 604, power component 606, multimedia component 608, audio component 610, input/output (I/O) interface 612, sensor component 614, and communication component 616.
The processing component 602 generally controls overall operation of the device 60, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 602 may include one or more processors 620 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 602 can include one or more modules that facilitate interaction between the processing component 602 and other components. For example, the processing component 602 can include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operations at the apparatus 60. Examples of such data include instructions for any application or method operating on the device 60, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 604 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power supply component 606 provides power to the various components of device 60. Power components 606 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for device 60.
The multimedia component 608 includes a screen that provides an output interface between the device 60 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 608 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the device 60 is in an operating mode, such as a shooting mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 610 is configured to output and/or input audio signals. For example, audio component 610 includes a Microphone (MIC) configured to receive external audio signals when apparatus 60 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 604 or transmitted via the communication component 616. In some embodiments, audio component 610 further includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 614 includes one or more sensors for providing various aspects of status assessment for the device 60. For example, the sensor assembly 614 may detect the open/closed status of the device 60 and the relative positioning of components, such as the display and keypad of the device 60. The sensor assembly 614 may also detect a change in the position of the device 60 or a component of the device 60, the presence or absence of user contact with the device 60, the orientation or acceleration/deceleration of the device 60, and a change in the temperature of the device 60. The sensor assembly 614 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate communications between the apparatus 60 and other devices in a wired or wireless manner. The device 60 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 616 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 616 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 60 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as the memory 604 including instructions executable by the processor 620 of the apparatus 60 to perform the above-described method. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer readable storage medium having instructions therein that, when executed by a processor of apparatus 60, enable apparatus 60 to perform an image processing method.
Fig. 17 is a block diagram illustrating an apparatus 70 for image processing according to an exemplary embodiment. For example, the apparatus 70 may be provided as a server. Referring to fig. 17, the apparatus 70 includes a processing component 722, which in turn includes one or more processors, and memory resources represented by memory 732 for storing instructions executable by the processing component 722, such as application programs. The application programs stored in memory 732 may include one or more modules, each corresponding to a set of instructions. Further, the processing component 722 is configured to execute the instructions to perform the image processing methods described above.
The device 70 may also include a power component 726 configured to perform power management of the device 70, a wired or wireless network interface 750 configured to connect the device 70 to a network, and an input/output (I/O) interface 758. The device 70 may operate based on an operating system stored in the memory 732, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (19)

1. An image processing method, comprising:
acquiring a face image through a camera of a terminal;
acquiring current ambient light information of the terminal, wherein the current ambient light information is used for reflecting brightness information of the environment where the terminal is currently located;
projecting structured light to a face and determining an intensity of current structured light reflected by the face, the intensity of the current structured light reflected by the face comprising intensities of structured light of a plurality of face regions on the face, the plurality of face regions being divided, based on the five sense organs, into: eye, nose, ear, forehead, cheek, and chin regions;
acquiring a specified correspondence;
querying the specified correspondence based on the current ambient light information and the intensity of the current structured light to obtain target display information, wherein the specified correspondence records the correspondence among the ambient light information, the structured light intensity, and the face display information;
and displaying the face image based on the target display information.
2. The method of claim 1, wherein obtaining the specified correspondence comprises:
in a designated training period, after the camera is started every time, projecting structured light to the face of a shooting area of the camera;
collecting ambient light information and determining the intensity of structured light reflected by a human face;
determining face display information;
and establishing the specified correspondence based on the ambient light information collected multiple times, the collected structured light intensity, and the corresponding face display information.
3. The method of claim 2,
the determining face display information includes:
determining a plurality of alternative face display information based on the collected ambient light information and the collected structured light intensity;
acquiring a face image to be adjusted through a camera of the terminal;
displaying the face image to be adjusted based on the plurality of alternative face display information respectively;
receiving a selection instruction aiming at a target face image, wherein the target face image is an image in a plurality of displayed face images to be adjusted;
and determining the alternative face display information corresponding to the target face image as the face display information.
4. The method of claim 3,
the acquiring the face image to be adjusted through the camera of the terminal comprises:
carrying out face recognition on a user using the terminal through a structured light technology;
when the identified users using the terminal include a target user, acquiring a face image of the target user through the camera of the terminal, and determining the acquired face image as the face image to be adjusted.
5. The method of claim 4,
the displaying the face image based on the target display information includes:
adjusting the skin colors of all human faces in the human face image based on the target display information;
or, when the face image comprises at least two faces including the face of the target user, adjusting the skin color of the face of the target user in the face image based on the target display information.
6. The method of claim 1, wherein obtaining the specified correspondence comprises:
receiving the specified correspondence sent by a server, wherein the specified correspondence is established by the server based on the ambient light information, the structured light intensity, and the corresponding face display information collected multiple times by each sample terminal;
wherein each sample terminal is configured to, each time its camera is started, project structured light to the face in the shooting area of the camera, collect ambient light information, determine the intensity of the structured light reflected by the face, and determine face display information, and each sample terminal is of the same model as the terminal.
7. The method of claim 1, wherein obtaining the specified correspondence comprises:
determining a current image display mode;
and determining, from a plurality of preset correspondences, the specified correspondence corresponding to the current image display mode, wherein the plurality of preset correspondences correspond to a plurality of image display modes one to one.
8. The method according to any one of claims 1 to 7, wherein the face display information includes display information of at least two face regions, the display information including at least one of brightness information, contrast information, transparency information, color information, and a filtering algorithm.
9. An image processing apparatus characterized by comprising:
the terminal comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is configured to acquire a face image through a camera of the terminal;
the second acquisition module is configured to acquire current ambient light information of the terminal, wherein the current ambient light information is used for reflecting brightness information of the current environment of the terminal;
a first determination module configured to project structured light toward a human face and determine an intensity of current structured light reflected by the human face, the intensity of the current structured light reflected by the human face comprising intensities of structured light of a plurality of human face regions on the human face, the plurality of human face regions being divided, based on the five sense organs, into: eye, nose, ear, forehead, cheek, and chin regions;
a second determination module configured to determine target display information based on the current ambient light information and the intensity of the current structured light;
an adjustment module configured to display the face image based on the target display information;
the second determining module includes:
an acquisition submodule configured to acquire a specified correspondence;
a query sub-module configured to query the specified correspondence based on the current ambient light information and the intensity of the current structured light to obtain the target display information, wherein the specified correspondence records a correspondence among ambient light information, structured light intensity, and face display information.
10. The apparatus of claim 9, wherein the acquisition submodule comprises:
the structured light projection unit is configured to project structured light to the face of the shooting area of the camera after the camera is started every time in a specified training period;
a structured light collection unit configured to collect ambient light information and determine an intensity of structured light reflected by a human face;
a determination unit configured to determine face display information;
an establishing unit configured to establish the specified correspondence based on the ambient light information collected multiple times, the collected structured light intensity, and the corresponding face display information.
11. The apparatus of claim 10,
the determination unit is configured to:
determining a plurality of alternative face display information based on the collected ambient light information and the collected structured light intensity;
acquiring a face image to be adjusted through a camera of the terminal;
displaying the face image to be adjusted based on the plurality of alternative face display information respectively;
receiving a selection instruction aiming at a target face image, wherein the target face image is an image in a plurality of displayed face images to be adjusted;
and determining the alternative face display information corresponding to the target face image as the face display information.
12. The apparatus of claim 11,
the determination unit is configured to:
carrying out face recognition on a user using the terminal through a structured light technology;
when the identified users using the terminal include a target user, acquiring a face image of the target user through the camera of the terminal, and determining the acquired face image as the face image to be adjusted.
13. The apparatus of claim 12,
the adjustment module configured to:
adjusting the skin colors of all human faces in the human face image based on the target display information;
or, when the face image comprises at least two faces including the face of the target user, adjusting the skin color of the face of the target user in the face image based on the target display information.
14. The apparatus of claim 9, wherein the acquisition submodule is configured to:
receiving the specified correspondence sent by a server, wherein the specified correspondence is established by the server based on the ambient light information, the structured light intensity, and the corresponding face display information collected multiple times by each sample terminal;
wherein each sample terminal is configured to, each time its camera is started, project structured light to the face in the shooting area of the camera, collect ambient light information, determine the intensity of the structured light reflected by the face, and determine face display information, and each sample terminal is of the same model as the terminal.
15. The apparatus of claim 9, wherein the acquisition submodule is configured to:
determining a current image display mode;
and determining, from a plurality of preset correspondences, the specified correspondence corresponding to the current image display mode, wherein the plurality of preset correspondences correspond to a plurality of image display modes one to one.
16. The apparatus of any of claims 9 to 15, wherein the face display information comprises display information of at least two human face regions, the display information comprising at least one of brightness information, contrast information, transparency information, color information, and a filtering algorithm.
17. An image processing apparatus, characterized in that the apparatus comprises:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of image processing of any of claims 1 to 8.
18. A computer-readable storage medium having stored thereon instructions which, when run on a processing component, cause the processing component to execute the method of image processing according to any one of claims 1 to 8.
19. An image processing system, characterized in that the system comprises:
a terminal and a server, wherein the terminal is the image processing device according to any one of claims 9 to 16;
alternatively, the terminal is the image processing apparatus according to claim 17.
CN201810841958.1A 2018-07-27 2018-07-27 Image processing method, device and system Active CN108965849B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810841958.1A CN108965849B (en) 2018-07-27 2018-07-27 Image processing method, device and system

Publications (2)

Publication Number Publication Date
CN108965849A CN108965849A (en) 2018-12-07
CN108965849B true CN108965849B (en) 2021-05-04

Family

ID=64465907

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810841958.1A Active CN108965849B (en) 2018-07-27 2018-07-27 Image processing method, device and system

Country Status (1)

Country Link
CN (1) CN108965849B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107124548A (en) * 2017-04-25 2017-09-01 深圳市金立通信设备有限公司 A kind of photographic method and terminal
CN107302662A (en) * 2017-07-06 2017-10-27 维沃移动通信有限公司 A kind of method, device and mobile terminal taken pictures
CN107341481A (en) * 2017-07-12 2017-11-10 深圳奥比中光科技有限公司 It is identified using structure light image
CN107679482A (en) * 2017-09-27 2018-02-09 广东欧珀移动通信有限公司 Solve lock control method and Related product
CN107705245A (en) * 2017-10-13 2018-02-16 北京小米移动软件有限公司 Image processing method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7619626B2 (en) * 2003-03-01 2009-11-17 The Boeing Company Mapping images from one or more sources into an image for display
US9667854B2 (en) * 2014-12-31 2017-05-30 Beijing Lenovo Software Ltd. Electornic device and information processing unit

Also Published As

Publication number Publication date
CN108965849A (en) 2018-12-07

Similar Documents

Publication Publication Date Title
US10565763B2 (en) Method and camera device for processing image
CN108986199B (en) Virtual model processing method and device, electronic equipment and storage medium
CN108419016B (en) Shooting method and device and terminal
CN109360261B (en) Image processing method, image processing device, electronic equipment and storage medium
CN106210496B (en) Photo shooting method and device
JP2016531362A (en) Skin color adjustment method, skin color adjustment device, program, and recording medium
EP3944607A1 (en) Image acquisition module, electronic device, image acquisition method and storage medium
CN108810422B (en) Light supplementing method and device for shooting environment and computer readable storage medium
US11310443B2 (en) Video processing method, apparatus and storage medium
CN111953903A (en) Shooting method, shooting device, electronic equipment and storage medium
CN111901459A (en) Color temperature adjusting method, device, terminal and storage medium
CN109167921B (en) Shooting method, shooting device, shooting terminal and storage medium
EP3799415A2 (en) Method and device for processing videos, and medium
CN108965849B (en) Image processing method, device and system
CN111835941A (en) Image generation method and device, electronic equipment and computer readable storage medium
CN115914721A (en) Live broadcast picture processing method and device, electronic equipment and storage medium
US11252341B2 (en) Method and device for shooting image, and storage medium
US11617023B2 (en) Method for brightness enhancement of preview image, apparatus, and medium
CN111277754B (en) Mobile terminal shooting method and device
CN114187874A (en) Brightness adjusting method and device and storage medium
CN114338956A (en) Image processing method, image processing apparatus, and storage medium
CN111385400A (en) Backlight brightness adjusting method and device
CN111835977A (en) Image sensor, image generation method and device, electronic device, and storage medium
CN113254118B (en) Skin color display device
CN113055605B (en) Image color temperature adjusting method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant