CN114693553A - Mobile intelligent terminal image processing method and system - Google Patents
Mobile intelligent terminal image processing method and system
- Publication number
- CN114693553A (application CN202210312454.7A)
- Authority
- CN
- China
- Prior art keywords
- pixel
- current
- image
- target
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T5/94—Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a method and a system for processing images of a mobile intelligent terminal. The method comprises the following steps: dividing a current display image of the mobile intelligent terminal into N equal-area target areas; determining the current brightness characteristic of the image in each target area; collecting a current face image of a target person watching the mobile intelligent terminal and extracting human eye characteristic points from the current face image; and intelligently adjusting and processing the current display image based on the current brightness characteristic of each area and the human eye characteristic points. By combining the brightness characteristic of the current display image with the human eye characteristic points of the target person in front of the mobile intelligent terminal, the resolution, brightness and the like of the current display image are intelligently adjusted and enhanced, so that the processing is carried out according to the actual situation of the viewer. This ensures the viewing experience of the user, improves practicability and improves the user experience.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a method and a system for processing an image of a mobile intelligent terminal.
Background
With the continuous development and maturing of intelligent terminal software and hardware technology, various mobile intelligent terminals have emerged one after another and have been accepted by most users. People use mobile intelligent terminals for video chat or to watch videos, live broadcasts and the like in their leisure time, which greatly enriches people's spare-time life. However, because the images transmitted to a mobile intelligent terminal may suffer from distortion, the displayed images need to be processed to ensure clarity. Existing image processing methods simply enhance the displayed image without considering the actual viewing situation of the user, so that some users have a very poor viewing experience of the displayed image, which reduces the user experience.
Disclosure of Invention
Aiming at the above problem, the invention provides a mobile intelligent terminal image processing method and system, which are used for solving the problem identified in the background art: existing image processing methods only enhance the displayed image and do not consider the actual viewing situation of the user, so that some users have a very poor viewing experience of the displayed image, which reduces the user experience.
A mobile intelligent terminal image processing method comprises the following steps:
dividing a current display image of the mobile intelligent terminal into N target areas with equal areas;
determining the current brightness characteristic of the image in each target area;
acquiring a current face image of a target person watching a mobile intelligent terminal, and extracting eye feature points from the current face image;
and intelligently adjusting and processing the current display image based on the current brightness characteristic and the human eye characteristic point of each region.
Preferably, the dividing the current display image of the mobile intelligent terminal into N equal-area target regions includes:
performing point cloud data detection on the current display image to obtain a detection result;
determining point cloud data distribution in the current display image according to the detection result;
determining the segmentation form and the number of N segmentation areas of the current display image according to the point cloud data distribution and a preset rule;
and dividing the current display image into N target regions with equal areas according to the division form.
Preferably, the determining the current brightness characteristic of the image in each target region includes:
extracting a characteristic factor of each pixel of the image in each target area;
constructing an image parameter matrix of each target area according to the characteristic factor of each pixel of the image in each target area;
determining a brightness parameter corresponding to each pixel according to each matrix factor parameter in the image parameter matrix;
matching in a preset database according to the brightness parameter corresponding to each pixel to obtain a brightness characteristic value corresponding to the pixel;
and calculating the average brightness characteristic value of each target area, and determining the current brightness characteristic of the target area according to the ratio of the average brightness characteristic value to the standard brightness characteristic value.
Preferably, the acquiring a current face image of a target person watching the mobile intelligent terminal and extracting eye feature points from the current face image include:
extracting the region of the eyes of a target person in the current face image;
determining a target deviation boundary of an area where the eyes of a target person are located according to the pixel proportion of the current image;
adjusting the area of the eyes of the target person according to the target deviation boundary to obtain an adjusted eye area;
and extracting the human eye characteristic points of the target person in the adjusted human eye area according to preset human eye characteristic parameters.
Preferably, the intelligently adjusting and processing the currently displayed image based on the current brightness feature and the human eye feature point of each region includes:
determining a human eye watching area of a target user according to the human eye characteristic points;
determining the current vision index of a target user, and adjusting the brightness of the current display image according to the current vision index and the current brightness characteristic of each area;
evaluating the display definition of the current brightness characteristic of each region to the target user based on the current vision index of the target user to obtain an evaluation result;
and intelligently adjusting the display scale/resolution of the current display image or performing enhancement processing on the current display image according to the evaluation result.
Preferably, the determining the target deviation boundary of the area where the human eyes of the target person are located according to the pixel proportion of the current image includes:
determining the area where the eyes of the target person are located and mask data around the area according to the pixel proportion of the current image;
modifying first mask data of an area where eyes of a target person are located, and acquiring second mask data change parameters around the area where the eyes of the target person are located;
constructing, according to the change parameters, a cost function describing how parameter changes in the region where the eyes of the target person are located drive changes in the surrounding parameters;
screening out target second mask data of which the second mask data change parameters around the region where the eyes of the target person are located are out of a preset range, and determining a first deviation boundary according to an interval corresponding to the target second mask data;
calculating a variation error of second mask data around the region where the eyes of the target person are located along with the variation of the first mask data of the region where the eyes of the target person are located according to the cost function;
correcting the first deviation boundary according to the change error to obtain a second deviation boundary;
determining the second deviation boundary as a target deviation boundary.
Preferably, before intelligently adjusting and processing the currently displayed image based on the current brightness characteristic and the human eye characteristic point of each region, the method further comprises:
determining the space aggregation characteristics of pixel points in the current display image according to the characteristic factors of each pixel of the current display image;
determining the aggregation characteristics of pixel points in the current display image according to the space aggregation characteristics;
determining the low-dimensional pixel representation distribution in the current display image based on the aggregation characteristics;
extracting deep features of each low-dimensional pixel in the current display image according to the low-dimensional pixel representation distribution;
and taking the deep features of each low-dimensional pixel as the to-be-processed features of the current display image for intelligent adjustment and processing.
Preferably, the determining of the area where the eyes of the target person are located and the mask data around the area according to the pixel proportion of the current image includes:
determining configuration information of equipment for shooting the current image according to the pixel proportion of the current image;
generating a mask matrix of a shot image of the equipment according to the configuration information;
determining a mask vector of each pixel according to the mask matrix and the pixel value of each pixel point of the current image;
grouping the mask vectors of each pixel according to the vector characteristics of the mask vectors of each pixel to obtain a grouping result;
determining the grouping pixel aggregation condition in each grouping result, and acquiring a first pixel at the periphery of the grouping pixel and a second pixel around the first pixel;
calculating phase coherence between each first pixel and each second pixel;
marking a target second pixel of which the phase coherence calculation result between the first pixel and the second pixel is smaller than a preset threshold value;
determining a pixel mask bit in the current image according to the marking condition of the target second pixel;
acquiring human eye pixel characteristics, and matching them against the aggregated grouped pixels of the current image to delineate the region where the eyes of the target person are located;
and performing pixel analysis on pixel mask bits in the current image to obtain mask data around the area where the human eyes of the target person are located.
A mobile intelligent terminal image processing system comprises:
the segmentation module is used for segmenting a current display image of the mobile intelligent terminal into N target areas with equal areas;
the determining module is used for determining the current brightness characteristic of the image in each target area;
the extraction module is used for collecting a current face image of a target person watching the mobile intelligent terminal and extracting human eye feature points from the current face image;
and the processing module is used for intelligently adjusting and processing the current display image based on the current brightness characteristic and the human eye characteristic point of each region.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention.
FIG. 1 is a flowchart illustrating an image processing method of a mobile intelligent terminal according to the present invention;
FIG. 2 is another flowchart of a method for processing an image of a mobile intelligent terminal according to the present invention;
FIG. 3 is a further flowchart of the mobile intelligent terminal image processing method according to the present invention;
fig. 4 is a schematic structural diagram of an image processing system of a mobile intelligent terminal according to the present invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
With the continuous development and maturing of intelligent terminal software and hardware technology, various mobile intelligent terminals have emerged one after another and have been accepted by most users. People use mobile intelligent terminals for video chat or to watch videos, live broadcasts and the like in their leisure time, which greatly enriches people's spare-time life. However, because the images transmitted to a mobile intelligent terminal may suffer from distortion, the displayed images need to be processed to ensure clarity. Existing image processing methods simply enhance the displayed image without considering the actual viewing situation of the user, so that some users have a very poor viewing experience of the displayed image, which reduces the user experience. In order to solve the above problem, this embodiment discloses a method for processing an image of a mobile intelligent terminal.
A mobile intelligent terminal image processing method is shown in FIG. 1, and comprises the following steps:
S101, segmenting a current display image of the mobile intelligent terminal into N target areas with equal areas;
S102, determining the current brightness characteristics of the image in each target area;
S103, collecting a current face image of a target person watching the mobile intelligent terminal, and extracting eye feature points from the current face image;
and S104, intelligently adjusting and processing the current display image based on the current brightness characteristic and the human eye characteristic point of each region.
The working principle of the technical scheme is as follows: the method comprises the steps of dividing a current display image of the mobile intelligent terminal into N equal-area target areas, determining the current brightness characteristic of the image in each target area, collecting the current face image of a target person watching the mobile intelligent terminal, extracting eye characteristic points from the current face image, and intelligently adjusting and processing the current display image based on the current brightness characteristic and the eye characteristic points of each area.
The beneficial effects of the above technical scheme are: the resolution, brightness and the like of the current display image are intelligently adjusted and enhanced according to the brightness characteristic of the current display image of the mobile intelligent terminal and the human eye characteristic points of the target person in front of the terminal, so that the processing can be carried out according to the actual situation of the viewer. This ensures the viewing experience of the user, improves both practicability and user experience, and solves the problem in the prior art that the actual viewing situation of the user is not considered, which leaves some users with a poor viewing experience of the displayed image and reduces the user experience.
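By way of illustration only, the following minimal Python sketch walks through the four steps S101 to S104 on a single frame. It is not the patented implementation: the 3x3 grid, the normalized gaze position, the vision index and the brightening rule are all assumed placeholders.

```python
# Illustrative sketch of S101-S104; all constants and rules are assumptions.
import numpy as np

def process_frame(display_img: np.ndarray, n: int = 3,
                  gaze_xy=(0.5, 0.5), vision_index: float = 1.0) -> np.ndarray:
    h, w = display_img.shape[:2]
    # S101: divide the current display image into an n x n grid of equal-area target areas
    regions = [display_img[i*h//n:(i+1)*h//n, j*w//n:(j+1)*w//n]
               for i in range(n) for j in range(n)]
    # S102: per-region brightness feature (mean gray level as a simple stand-in)
    brightness = [float(r.mean()) for r in regions]
    # S103 would supply the eye feature points / gaze position; here gaze_xy is assumed given
    row = min(int(gaze_xy[1] * n), n - 1)
    col = min(int(gaze_xy[0] * n), n - 1)
    # S104: brighten the frame when the gazed region is darker than average,
    # scaled by the viewer's vision index (an invented adjustment rule)
    gain = 1.0 + 0.3 * vision_index if brightness[row * n + col] < float(np.mean(brightness)) else 1.0
    return np.clip(display_img.astype(np.float32) * gain, 0, 255).astype(np.uint8)

# Example: process_frame(np.random.randint(0, 256, (720, 1280), dtype=np.uint8))
```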
In one embodiment, as shown in fig. 2, the segmenting the currently displayed image of the mobile intelligent terminal into N equal-area target regions includes:
step S201, point cloud data detection is carried out on the current display image, and a detection result is obtained;
step S202, determining point cloud data distribution in the current display image according to the detection result;
step S203, determining the segmentation form and the number of N segmentation areas of the current display image according to the point cloud data distribution and a preset rule;
and S204, dividing the current display image into N target areas with equal area according to the division form.
The beneficial effects of the above technical scheme are: the segmentation form and the number of segmentation areas of the current display image are determined according to the point cloud data distribution, so that the point cloud data distribution in each segmented area is uniform. This lays a foundation for subsequent image processing and ensures the consistency and objectivity of the processed samples.
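A rough sketch of this segmentation flow is given below. Corner points detected with OpenCV's goodFeaturesToTrack stand in for the "point cloud data" of steps S201 and S202, and the preset rule that maps the point count to the grid shape is an invented example.

```python
# Sketch of S201-S204; the corner detector and the density-to-grid rule are assumptions.
import cv2
import numpy as np

def split_by_point_distribution(img_gray: np.ndarray) -> list:
    # S201/S202: detect feature points and take their count as the distribution measure
    pts = cv2.goodFeaturesToTrack(img_gray, maxCorners=500, qualityLevel=0.01, minDistance=5)
    n_pts = 0 if pts is None else len(pts)
    # S203: preset rule mapping point density to the segmentation form and number N
    rows, cols = (2, 2) if n_pts < 100 else (3, 3) if n_pts < 300 else (4, 4)
    h, w = img_gray.shape
    # S204: divide into N = rows * cols equal-area target regions
    return [img_gray[i*h//rows:(i+1)*h//rows, j*w//cols:(j+1)*w//cols]
            for i in range(rows) for j in range(cols)]
```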
In one embodiment, the determining the current brightness characteristic of the image in each target region includes:
extracting a characteristic factor of each pixel of the image in each target area;
constructing an image parameter matrix of each target area according to the characteristic factor of each pixel of the image in each target area;
determining a brightness parameter corresponding to each pixel according to each matrix factor parameter in the image parameter matrix;
matching in a preset database according to the brightness parameter corresponding to each pixel to obtain a brightness characteristic value corresponding to the pixel;
and calculating the average brightness characteristic value of each target area, and determining the current brightness characteristic of the target area according to the ratio of the average brightness characteristic value to the standard brightness characteristic value.
The beneficial effects of the above technical scheme are: the brightness characteristic value of each pixel is obtained from the parameter matrix, and the average brightness characteristic value of each target area is then determined, so that the current brightness characteristic of each target area is determined comprehensively with every pixel in the area taken into account, ensuring the reasonableness and objectivity of the final result.
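The sketch below shows one way such a per-region brightness feature could be computed, assuming the "preset database" is a simple lookup table and the standard brightness characteristic value is 128; both are illustrative assumptions rather than values from the patent.

```python
# Per-region brightness feature: parameter matrix -> lookup -> average / standard value.
import numpy as np

def region_brightness_feature(region: np.ndarray, standard_value: float = 128.0) -> float:
    param_matrix = region.astype(np.float32)                 # per-pixel characteristic factors
    lut = np.linspace(0.0, 255.0, 256)                       # stand-in for the preset database
    char_values = lut[np.clip(param_matrix, 0, 255).astype(np.uint8)]  # brightness characteristic values
    return float(char_values.mean() / standard_value)        # ratio of average to standard value
```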
In an embodiment, as shown in fig. 3, the acquiring a current face image of a target person watching a mobile intelligent terminal, and extracting eye feature points from the current face image includes:
step S301, extracting the region of the eyes of a target person in the current face image;
step S302, determining a target deviation boundary of a region where the eyes of a target person are located according to the pixel proportion of the current image;
step S303, adjusting the area where the eyes of the target person are located according to the target deviation boundary to obtain an adjusted eye area;
and S304, extracting the human eye characteristic points of the target person in the adjusted human eye area according to preset human eye characteristic parameters.
The beneficial effects of the above technical scheme are: determining the target deviation boundary of the region where the eyes of the target person are located effectively avoids the influence of errors, so that the human eye characteristic points of the target person can be extracted more accurately. This improves the extraction precision, avoids errors in determining the region range, and further improves practicability.
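As an illustration of steps S301 to S304, the sketch below uses OpenCV's stock Haar eye cascade as the detector. The fixed pixel margin that stands in for the target deviation boundary and the corner-based feature points are assumptions; they are not the boundary-correction procedure described later.

```python
# Sketch of S301-S304; the margin and corner parameters are illustrative.
import cv2
import numpy as np

def eye_feature_points(face_gray: np.ndarray, margin: int = 4) -> list:
    cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
    eyes = cascade.detectMultiScale(face_gray, scaleFactor=1.1, minNeighbors=5)      # S301
    points = []
    for (x, y, w, h) in eyes:
        x0, y0 = max(x - margin, 0), max(y - margin, 0)                              # S302/S303: widen the
        roi = face_gray[y0:y + h + margin, x0:x + w + margin]                        # region by a margin
        corners = cv2.goodFeaturesToTrack(roi, maxCorners=10, qualityLevel=0.01, minDistance=3)  # S304
        if corners is not None:
            points.extend([(x0 + int(cx), y0 + int(cy)) for cx, cy in corners.reshape(-1, 2)])
    return points
```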
In one embodiment, the intelligently adjusting and processing the currently displayed image based on the current brightness characteristic and the human eye characteristic point of each region includes:
determining a human eye watching area of a target user according to the human eye characteristic points;
determining the current vision index of a target user, and adjusting the brightness of the current display image according to the current vision index and the current brightness characteristic of each area;
evaluating the display definition of the current brightness characteristic of each area to the target user based on the current vision index of the target user to obtain an evaluation result;
and intelligently adjusting the display scale/resolution of the current display image or performing enhancement processing on the current display image according to the evaluation result.
The beneficial effects of the above technical scheme are: by selectively processing the current display image according to its current brightness characteristics and the vision index of the target user, a reasonable processing strategy can be chosen according to the actual needs of the target user, which further improves practicability.
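A hedged sketch of this adjustment step follows; the gain formula, the clarity score and the 0.5 threshold that decides between rescaling and enhancement are invented placeholders, with region_brightness assumed to hold the per-region ratios from the previous step.

```python
# Adjust brightness from the vision index, then rescale or sharpen based on a clarity score.
import cv2
import numpy as np

def adjust_for_viewer(img: np.ndarray, region_brightness: list, vision_index: float) -> np.ndarray:
    mean_b = float(np.mean(region_brightness))
    gain = 1.0 + 0.2 * (1.0 - min(vision_index, 1.0)) * max(1.0 - mean_b, 0.0)
    out = np.clip(img.astype(np.float32) * gain, 0, 255).astype(np.uint8)
    clarity = vision_index * mean_b                                   # evaluation result (illustrative)
    if clarity < 0.5:
        # enlarge the display scale so the content is easier to see
        out = cv2.resize(out, None, fx=1.25, fy=1.25, interpolation=cv2.INTER_CUBIC)
    else:
        # unsharp-mask style enhancement of the current display image
        out = cv2.addWeighted(out, 1.5, cv2.GaussianBlur(out, (0, 0), 3), -0.5, 0)
    return out
```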
In one embodiment, the determining a target deviation boundary of a region where the eyes of the target person are located according to the pixel proportion of the current image includes:
determining the area where the eyes of the target person are located and mask data around the area according to the pixel proportion of the current image;
modifying first mask data of an area where eyes of a target person are located, and acquiring second mask data change parameters around the area where the eyes of the target person are located;
constructing, according to the change parameters, a cost function describing how parameter changes in the region where the eyes of the target person are located drive changes in the surrounding parameters;
screening out target second mask data of which the second mask data change parameters around the region where the eyes of the target person are located are out of a preset range, and determining a first deviation boundary according to an interval corresponding to the target second mask data;
calculating a variation error of second mask data around the region where the eyes of the target person are located along with the variation of the first mask data of the region where the eyes of the target person are located according to the cost function;
correcting the first deviation boundary according to the change error to obtain a second deviation boundary;
determining the second deviation boundary as a target deviation boundary.
The beneficial effects of the above technical scheme are: by constructing a cost function of how the second mask data around the region where the eyes of the target person are located change with the first mask data of that region, and using it to correct the first deviation boundary, errors are further avoided. This improves accuracy, avoids missing regions, and improves stability.
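The sketch below gives one possible reading of this boundary-correction idea: the eye-region mask is perturbed, the induced change in the surrounding mask values is measured with a simple mean filter, values outside a preset range define the first deviation boundary, and a quadratic change error shrinks it into the target boundary. The perturbation size, the allowed range and the correction scaling are all assumptions.

```python
# Hypothetical reading of the mask-perturbation / cost-function boundary correction.
import numpy as np

def _mean3x3(m: np.ndarray) -> np.ndarray:
    # 3x3 mean filter via shifted copies (edges wrap; good enough for a sketch)
    out = np.zeros_like(m)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += np.roll(np.roll(m, di, axis=0), dj, axis=1)
    return out / 9.0

def deviation_boundary(mask: np.ndarray, eye_box: tuple, delta: float = 0.1,
                       allowed: tuple = (-0.02, 0.02)) -> tuple:
    """eye_box = (x0, y0, x1, y1); returns a corrected (x0, y0, x1, y1) boundary."""
    x0, y0, x1, y1 = eye_box
    base = mask.astype(np.float32)
    perturbed = base.copy()
    perturbed[y0:y1, x0:x1] += delta                        # modify the first mask data (eye region)
    change = _mean3x3(perturbed) - _mean3x3(base)           # second-mask-data change parameters
    outside = (change < allowed[0]) | (change > allowed[1])
    outside[y0:y1, x0:x1] = False                           # only keep the surrounding pixels
    ys, xs = np.nonzero(outside)
    if xs.size == 0:
        return eye_box
    bx0, by0, bx1, by1 = xs.min(), ys.min(), xs.max(), ys.max()     # first deviation boundary
    err = float(np.mean(change[outside] ** 2))              # change error from a quadratic cost
    shrink = int(round(err * 100))                          # correction step (illustrative scaling)
    return (bx0 + shrink, by0 + shrink, bx1 - shrink, by1 - shrink)  # second / target boundary
```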
In one embodiment, before intelligently adjusting and processing the currently displayed image based on the current brightness characteristic and the human eye characteristic point of each region, the method further comprises:
determining the space aggregation characteristics of pixel points in the current display image according to the characteristic factors of each pixel of the current display image;
determining the aggregation characteristics of pixel points in the current display image according to the space aggregation characteristics;
determining the low-dimensional pixel representation distribution in the current display image based on the aggregation characteristics;
extracting deep features of each low-dimensional pixel in the current display image according to the low-dimensional pixel representation distribution;
and taking the deep features of each low-dimensional pixel as the to-be-processed features of the current display image for intelligent adjustment and processing.
The beneficial effects of the above technical scheme are: determining the to-be-processed features of the current display image for intelligent adjustment and processing allows these features to be given priority in subsequent processing of the current display image, which improves processing efficiency.
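One way to realize this pre-processing step is sketched below, with k-means clustering standing in for the spatial aggregation and PCA providing the low-dimensional pixel representation; the cluster count, the PCA dimension and the per-cluster statistics used as "deep features" are illustrative choices, and the image should be downsampled first for speed.

```python
# Cluster per-pixel feature factors, project them to a low dimension, and summarize each cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def features_to_process(img: np.ndarray, n_clusters: int = 8, n_dims: int = 2) -> np.ndarray:
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    feats = np.column_stack([img.reshape(h * w, -1).astype(np.float32),   # intensity factors
                             xx.ravel(), yy.ravel()])                     # spatial coordinates
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(feats)  # aggregation
    low_dim = PCA(n_components=n_dims).fit_transform(feats)               # low-dimensional representation
    # stand-in "deep" feature per cluster: mean and spread of its low-dimensional points
    return np.array([np.concatenate([low_dim[labels == c].mean(axis=0),
                                     low_dim[labels == c].std(axis=0)])
                     for c in range(n_clusters)])
```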
In one embodiment, the determining the area where the eyes of the target person are located and the mask data around the area according to the pixel proportion of the current image includes:
determining configuration information of equipment for shooting the current image according to the pixel proportion of the current image;
generating a mask matrix of a shot image of the equipment according to the configuration information;
determining a mask vector of each pixel according to the mask matrix and the pixel value of each pixel point of the current image;
grouping the mask vectors of each pixel according to the vector characteristics of the mask vectors of each pixel to obtain a grouping result;
determining the grouping pixel aggregation condition in each grouping result, and acquiring a first pixel at the periphery of the grouping pixel and a second pixel around the first pixel;
calculating phase coherence between each first pixel and each second pixel;
marking a target second pixel of which the phase coherence calculation result between the first pixel and the second pixel is smaller than a preset threshold value;
determining a pixel mask bit in the current image according to the marking condition of the target second pixel;
acquiring human eye pixel characteristics, and matching them against the aggregated grouped pixels of the current image to delineate the region where the eyes of the target person are located;
and performing pixel analysis on pixel mask bits in the current image to obtain mask data around the area where the human eyes of the target person are located.
The beneficial effects of the above technical scheme are: the region where the eyes of the target person are located can be determined intuitively and accurately from the pixel aggregation condition of the current image, which improves the accuracy of the detection result. Furthermore, by determining the pixel mask bits in the current image, the irrelevant mask regions in the current image can be identified from the pixel distribution parameters, providing an effective reference sample for the subsequent acquisition of mask data.
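The sketch below approximates this mask-bit procedure: a device-dependent scalar stands in for the mask matrix, pixels are grouped by quantizing their mask vectors, and "phase coherence" between a group-boundary pixel and its right-hand neighbor is taken as the cosine of their gradient-phase difference. The device gain, the quantization step and the threshold are assumptions.

```python
# Hypothetical mask-bit computation from grouped mask vectors and a phase-coherence test.
import numpy as np

def pixel_mask_bits(img_gray: np.ndarray, device_gain: float = 1.0, threshold: float = 0.8) -> np.ndarray:
    mask_matrix = np.full(img_gray.shape, device_gain, dtype=np.float32)   # from device configuration info
    mask_vec = mask_matrix * img_gray.astype(np.float32)                   # mask vector per pixel
    groups = (mask_vec // 32).astype(np.int32)                             # grouping by vector feature
    gy, gx = np.gradient(img_gray.astype(np.float32))
    phase = np.arctan2(gy, gx)                                             # local gradient phase
    boundary = groups != np.roll(groups, -1, axis=1)                       # first pixels (group edges)
    coherence = np.cos(phase - np.roll(phase, -1, axis=1))                 # vs. the right-hand neighbor
    marked = boundary & (coherence < threshold)                            # target second pixels
    return marked.astype(np.uint8)                                         # pixel mask bits
```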
The embodiment also discloses a mobile intelligent terminal image processing system, as shown in fig. 4, the system includes:
the segmentation module 401 is configured to segment a current display image of the mobile intelligent terminal into N equal-area target regions;
a determining module 402, configured to determine a current brightness characteristic of the image in each target region;
an extraction module 403, configured to collect a current face image of a target person watching a mobile intelligent terminal, and extract an eye feature point from the current face image;
and the processing module 404 is configured to intelligently adjust and process the currently displayed image based on the current brightness feature and the human eye feature point of each region.
The working principle and the beneficial effects of the above technical solution have been explained in the foregoing method embodiments, and are not described herein again.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (9)
1. A mobile intelligent terminal image processing method is characterized by comprising the following steps:
dividing a current display image of the mobile intelligent terminal into N target areas with equal areas;
determining the current brightness characteristic of the image in each target area;
acquiring a current face image of a target person watching a mobile intelligent terminal, and extracting eye feature points from the current face image;
and intelligently adjusting and processing the current display image based on the current brightness characteristic and the human eye characteristic point of each region.
2. The image processing method of the mobile intelligent terminal according to claim 1, wherein the dividing the current display image of the mobile intelligent terminal into N equal-area target regions comprises:
performing point cloud data detection on the current display image to obtain a detection result;
determining point cloud data distribution in the current display image according to the detection result;
determining the segmentation form and the number of N segmentation areas of the current display image according to the point cloud data distribution and a preset rule;
and dividing the current display image into N target regions with equal areas according to the division form.
3. The image processing method of the mobile intelligent terminal according to claim 1, wherein the determining the current brightness characteristic of the image in each target area comprises:
extracting a characteristic factor of each pixel of the image in each target area;
constructing an image parameter matrix of each target area according to the characteristic factors of each pixel of the image in each target area;
determining a brightness parameter corresponding to each pixel according to each matrix factor parameter in the image parameter matrix;
matching in a preset database according to the brightness parameter corresponding to each pixel to obtain a brightness characteristic value corresponding to the pixel;
and calculating the average brightness characteristic value of each target area, and determining the current brightness characteristic of the target area according to the ratio of the average brightness characteristic value to the standard brightness characteristic value.
4. The image processing method of the mobile intelligent terminal according to claim 1, wherein the acquiring a current face image of a target person watching the mobile intelligent terminal and extracting eye feature points from the current face image comprises:
extracting the region of the eyes of a target person in the current face image;
determining a target deviation boundary of an area where the eyes of a target person are located according to the pixel proportion of the current image;
adjusting the region of the human eye of the target person according to the target deviation boundary to obtain an adjusted human eye region;
and extracting the human eye characteristic points of the target person in the adjusted human eye area according to preset human eye characteristic parameters.
5. The image processing method of the mobile intelligent terminal according to claim 1, wherein the intelligently adjusting and processing the currently displayed image based on the current brightness characteristic and the human eye characteristic point of each area comprises:
determining a human eye watching area of a target user according to the human eye characteristic points;
determining the current vision index of a target user, and adjusting the brightness of the current display image according to the current vision index and the current brightness characteristic of each area;
evaluating the display definition of the current brightness characteristic of each region to the target user based on the current vision index of the target user to obtain an evaluation result;
and intelligently adjusting the display scale/resolution of the current display image or performing enhancement processing on the current display image according to the evaluation result.
6. The image processing method of the mobile intelligent terminal according to claim 4, wherein the determining of the target deviation boundary of the region where the eyes of the target person are located according to the pixel proportion of the current image comprises:
determining the area where the eyes of the target person are located and mask data around the area according to the pixel proportion of the current image;
modifying first mask data of an area where eyes of a target person are located, and acquiring second mask data change parameters around the area where the eyes of the target person are located;
constructing, according to the change parameters, a cost function describing how parameter changes in the region where the eyes of the target person are located drive changes in the surrounding parameters;
screening out target second mask data of which the second mask data change parameters around the region where the eyes of the target person are located are out of a preset range, and determining a first deviation boundary according to an interval corresponding to the target second mask data;
calculating a variation error of the second mask data around the area where the eyes of the target person are located along with the variation of the first mask data of the area where the eyes of the target person are located according to the cost function;
correcting the first deviation boundary according to the change error to obtain a second deviation boundary;
determining the second deviation boundary as a target deviation boundary.
7. The image processing method of the mobile intelligent terminal according to claim 3, wherein before intelligently adjusting and processing the currently displayed image based on the current brightness feature and the human eye feature point of each region, the method further comprises:
determining the space aggregation characteristics of pixel points in the current display image according to the characteristic factors of each pixel of the current display image;
determining the aggregation characteristics of pixel points in the current display image according to the space aggregation characteristics;
determining the low-dimensional pixel representation distribution in the current display image based on the aggregation characteristics;
extracting deep features of each low-dimensional pixel in the current display image according to the low-dimensional pixel representation distribution;
and taking the deep features of each low-dimensional pixel as the to-be-processed features of the current display image for intelligent adjustment and processing.
8. The image processing method of the mobile intelligent terminal according to claim 6, wherein the determining of the area where the eyes of the target person are located and the mask data around the area according to the pixel proportion of the current image comprises:
determining configuration information of equipment for shooting the current image according to the pixel proportion of the current image;
generating a mask matrix of a shot image of the equipment according to the configuration information;
determining a mask vector of each pixel according to the mask matrix and the pixel value of each pixel point of the current image;
grouping the mask vectors of each pixel according to the vector characteristics of the mask vectors of each pixel to obtain a grouping result;
determining the grouping pixel aggregation condition in each grouping result, and acquiring a first pixel at the periphery of the grouping pixel and a second pixel around the first pixel;
calculating phase coherence between each first pixel and each second pixel;
marking a target second pixel of which the phase coherence calculation result between the first pixel and the second pixel is smaller than a preset threshold value;
determining a pixel mask bit in the current image according to the marking condition of the target second pixel;
acquiring human eye pixel characteristics, and matching them against the aggregated grouped pixels of the current image to delineate the region where the eyes of the target person are located;
and performing pixel analysis on pixel mask bits in the current image to obtain mask data around the area where the human eyes of the target person are located.
9. A mobile intelligent terminal image processing system is characterized by comprising:
the segmentation module is used for segmenting a current display image of the mobile intelligent terminal into N target areas with equal areas;
the determining module is used for determining the current brightness characteristic of the image in each target area;
the extraction module is used for collecting a current face image of a target person watching the mobile intelligent terminal and extracting human eye feature points from the current face image;
and the processing module is used for intelligently adjusting and processing the current display image based on the current brightness characteristic and the human eye characteristic point of each region.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210312454.7A CN114693553A (en) | 2022-03-28 | 2022-03-28 | Mobile intelligent terminal image processing method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210312454.7A CN114693553A (en) | 2022-03-28 | 2022-03-28 | Mobile intelligent terminal image processing method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114693553A true CN114693553A (en) | 2022-07-01 |
Family
ID=82140804
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210312454.7A Pending CN114693553A (en) | 2022-03-28 | 2022-03-28 | Mobile intelligent terminal image processing method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114693553A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115690892A (en) * | 2023-01-03 | 2023-02-03 | 京东方艺云(杭州)科技有限公司 | Squinting recognition method and device, electronic equipment and storage medium |
CN115690892B (en) * | 2023-01-03 | 2023-06-13 | 京东方艺云(杭州)科技有限公司 | Mitigation method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |