CN114693553A - Mobile intelligent terminal image processing method and system - Google Patents

Mobile intelligent terminal image processing method and system

Info

Publication number
CN114693553A
CN114693553A
Authority
CN
China
Prior art keywords
image
pixel
area
current
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210312454.7A
Other languages
Chinese (zh)
Inventor
张湃
王丽侠
任丽棉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tangshan University
Original Assignee
Tangshan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tangshan University filed Critical Tangshan University
Priority to CN202210312454.7A priority Critical patent/CN114693553A/en
Publication of CN114693553A publication Critical patent/CN114693553A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/90 - Dynamic range modification of images or parts thereof
    • G06T5/94 - Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 - Eye tracking input arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/90 - Determination of colour characteristics
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20021 - Dividing image into blocks, subimages or windows

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image processing method and system for a mobile intelligent terminal. The method comprises: dividing the image currently displayed by the mobile intelligent terminal into N target areas of equal area; determining the current brightness characteristic of the image in each target area; capturing the current face image of a target person watching the mobile intelligent terminal and extracting eye feature points from that face image; and intelligently adjusting and processing the currently displayed image based on the current brightness characteristic of each area and the eye feature points. By combining the brightness characteristics of the currently displayed image with the eye feature points of the target person in front of the terminal, the resolution, brightness and other properties of the displayed image are intelligently adjusted and enhanced according to the viewer's actual situation, which safeguards the viewing experience and improves both the practicality of the method and the user's overall experience.

Description

Mobile intelligent terminal image processing method and system

Technical Field

The present invention relates to the technical field of image processing, and in particular to an image processing method and system for a mobile intelligent terminal.

Background Art

With the continuous development and maturation of intelligent terminal software and hardware technology, a wide variety of mobile intelligent terminals have emerged and been accepted by users, who use them in their spare time for video chat, watching videos, live streams and so on, which greatly enriches their leisure life. However, because the images transmitted to a mobile intelligent terminal suffer from distortion, the image displayed on the terminal has to be processed to guarantee clarity. Existing image processing methods simply enhance the displayed image without taking the user's actual viewing situation into account, so some users have a very poor viewing experience of the displayed image, which degrades the overall user experience.

Summary of the Invention

In view of the above problems, the present invention provides an image processing method and system for a mobile intelligent terminal, so as to solve the problem mentioned in the background art that existing image processing methods simply enhance the displayed image without considering the user's actual viewing situation, leaving some users with a very poor viewing experience of the displayed image and degrading the overall user experience.

An image processing method for a mobile intelligent terminal comprises the following steps:

dividing the image currently displayed by the mobile intelligent terminal into N target areas of equal area;

determining the current brightness characteristic of the image in each target area;

capturing the current face image of a target person watching the mobile intelligent terminal, and extracting eye feature points from the current face image;

intelligently adjusting and processing the currently displayed image based on the current brightness characteristic of each area and the eye feature points.

Preferably, dividing the currently displayed image of the mobile intelligent terminal into N target areas of equal area comprises:

performing point cloud data detection on the currently displayed image to obtain a detection result;

determining the point cloud data distribution in the currently displayed image according to the detection result;

determining, according to the point cloud data distribution and preset rules, the segmentation form of the currently displayed image and the number N of segmented areas;

dividing the currently displayed image into N target areas of equal area according to the segmentation form.

Preferably, determining the current brightness characteristic of the image in each target area comprises:

extracting a feature factor of each pixel of the image in each target area;

constructing an image parameter matrix of the target area according to the feature factor of each pixel of the image in that target area;

determining the brightness parameter corresponding to each pixel according to each matrix factor parameter in the image parameter matrix;

matching the brightness parameter corresponding to each pixel against a preset database to obtain the brightness feature value corresponding to that pixel;

calculating the average brightness feature value of each target area, and determining the current brightness characteristic of the target area according to the ratio of the average brightness feature value to a standard brightness feature value.

Preferably, capturing the current face image of the target person watching the mobile intelligent terminal and extracting eye feature points from the current face image comprises:

extracting the area where the target person's eyes are located in the current face image;

determining the target deviation boundary of the area where the target person's eyes are located according to the pixel ratio of the current image;

adjusting the area where the target person's eyes are located according to the target deviation boundary to obtain an adjusted eye area;

extracting the eye feature points of the target person from the adjusted eye area according to preset eye feature parameters.

Preferably, intelligently adjusting and processing the currently displayed image based on the current brightness characteristic of each area and the eye feature points comprises:

determining the gaze area of the target user according to the eye feature points;

determining the current visual acuity index of the target user, and adjusting the brightness of the currently displayed image according to the current visual acuity index and the current brightness characteristic of each area;

evaluating, based on the current visual acuity index of the target user, the display clarity that the current brightness characteristic of each area provides to the target user, and obtaining an evaluation result;

intelligently adjusting the display scale/resolution of the currently displayed image, or enhancing the currently displayed image, according to the evaluation result.

Preferably, determining the target deviation boundary of the area where the target person's eyes are located according to the pixel ratio of the current image comprises:

determining, according to the pixel ratio of the current image, the mask data of the area where the target person's eyes are located and of its surroundings;

modifying the first mask data of the area where the target person's eyes are located, and obtaining the change parameters of the second mask data around that area;

constructing, according to the change parameters, a cost function describing how parameter changes in the eye area drive parameter changes in its surroundings;

screening out the target second mask data whose change parameters fall outside a preset range, and determining a first deviation boundary according to the interval corresponding to the target second mask data;

calculating, according to the cost function, the variation error of the second mask data around the eye area as the first mask data of the eye area change;

correcting the first deviation boundary according to the variation error to obtain a second deviation boundary;

confirming the second deviation boundary as the target deviation boundary.

Preferably, before intelligently adjusting and processing the currently displayed image based on the current brightness characteristic of each area and the eye feature points, the method further comprises:

determining the spatial aggregation feature of the pixels in the currently displayed image according to the feature factor of each pixel of the currently displayed image;

determining the aggregation characteristics of the pixels in the currently displayed image according to the spatial aggregation feature;

determining, based on the aggregation characteristics, the low-dimensional pixel representation distribution in the currently displayed image;

extracting the deep feature of each low-dimensional pixel in the currently displayed image according to the low-dimensional pixel representation distribution;

taking the deep feature of each low-dimensional pixel as the feature to be processed when the currently displayed image is intelligently adjusted and processed.

Preferably, determining the mask data of the area where the target person's eyes are located and of its surroundings according to the pixel ratio of the current image comprises:

determining the configuration information of the device that captured the current image according to the pixel ratio of the current image;

generating a mask matrix of the image captured by the device according to the configuration information;

determining the mask vector of each pixel according to the mask matrix combined with the pixel value of each pixel of the current image;

grouping the mask vectors of the pixels according to their vector features to obtain grouping results;

determining the aggregation of grouped pixels in each grouping result, and obtaining the first pixels on the periphery of the aggregated grouped pixels and the second pixels around them;

calculating the phase coherence between each first pixel and second pixel;

marking the target second pixels for which the calculated phase coherence between the first pixel and the second pixel is less than a preset threshold;

determining the pixel mask bits in the current image according to the marking of the target second pixels;

obtaining eye pixel features, and matching them against the aggregated grouped pixels of the current image to delimit the area where the target person's eyes are located;

performing pixel parsing on the pixel mask bits in the current image to obtain the mask data around the area where the target person's eyes are located.

An image processing system for a mobile intelligent terminal comprises:

a segmentation module, configured to divide the image currently displayed by the mobile intelligent terminal into N target areas of equal area;

a determination module, configured to determine the current brightness characteristic of the image in each target area;

an extraction module, configured to capture the current face image of the target person watching the mobile intelligent terminal and extract eye feature points from the current face image;

a processing module, configured to intelligently adjust and process the currently displayed image based on the current brightness characteristic of each area and the eye feature points.

Other features and advantages of the present invention will be set forth in the description that follows and will in part become apparent from the description, or may be learned by practicing the invention. The objectives and other advantages of the invention may be realized and attained by the structures particularly pointed out in the written description and the drawings.

The technical solutions of the present invention are described in further detail below with reference to the accompanying drawings and embodiments.

Description of the Drawings

The accompanying drawings are provided for a further understanding of the present invention and constitute a part of the specification; together with the embodiments of the present invention, they serve to explain the present invention and do not limit it.

Fig. 1 is a flowchart of an image processing method for a mobile intelligent terminal provided by the present invention;

Fig. 2 is another flowchart of an image processing method for a mobile intelligent terminal provided by the present invention;

Fig. 3 is yet another flowchart of an image processing method for a mobile intelligent terminal provided by the present invention;

Fig. 4 is a schematic structural diagram of an image processing system for a mobile intelligent terminal provided by the present invention.

Detailed Description

Exemplary embodiments are described in detail here, with examples illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as recited in the appended claims.

With the continuous development and maturation of intelligent terminal software and hardware technology, a wide variety of mobile intelligent terminals have emerged and been accepted by users, who use them in their spare time for video chat, watching videos, live streams and so on, which greatly enriches their leisure life. However, because the images transmitted to a mobile intelligent terminal suffer from distortion, the image displayed on the terminal has to be processed to guarantee clarity. Existing image processing methods simply enhance the displayed image without taking the user's actual viewing situation into account, so some users have a very poor viewing experience of the displayed image, which degrades the overall user experience. In order to solve this problem, this embodiment discloses an image processing method for a mobile intelligent terminal.

An image processing method for a mobile intelligent terminal, as shown in Fig. 1, comprises the following steps:

Step S101: dividing the image currently displayed by the mobile intelligent terminal into N target areas of equal area;

Step S102: determining the current brightness characteristic of the image in each target area;

Step S103: capturing the current face image of a target person watching the mobile intelligent terminal, and extracting eye feature points from the current face image;

Step S104: intelligently adjusting and processing the currently displayed image based on the current brightness characteristic of each area and the eye feature points.

The working principle of the above technical solution is as follows: the image currently displayed by the mobile intelligent terminal is divided into N target areas of equal area; the current brightness characteristic of the image in each target area is determined; the current face image of the target person watching the mobile intelligent terminal is captured and eye feature points are extracted from it; and the currently displayed image is intelligently adjusted and processed based on the current brightness characteristic of each area and the eye feature points.

The beneficial effects of the above technical solution are as follows: by combining the brightness characteristics of the currently displayed image of the mobile intelligent terminal with the eye feature points of the target person in front of the terminal, the resolution, brightness and other properties of the displayed image are intelligently adjusted and enhanced according to the viewer's actual situation. This guarantees the user's viewing experience and improves both practicality and the user's sense of experience, and it solves the prior-art problem that simply enhancing the displayed image without considering the user's actual viewing situation leaves some users with a very poor viewing experience.
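For illustration only, the following Python sketch strings steps S101-S104 together on NumPy arrays. The 3x3 tile grid, the BT.601 luma weights, the crude dark-pixel eye locator and the global gamma-based adjustment are all assumptions made for this sketch; the patent itself does not disclose an implementation.

```python
import numpy as np

def segment_equal_areas(img, rows=3, cols=3):
    """Step S101: split the frame into rows*cols equal-area tiles."""
    h, w = img.shape[:2]
    return [img[r*h//rows:(r+1)*h//rows, c*w//cols:(c+1)*w//cols]
            for r in range(rows) for c in range(cols)]

def region_brightness(tile):
    """Step S102: mean luma of a tile (BT.601 weights), scaled to [0, 1]."""
    luma = tile[..., :3] @ np.array([0.299, 0.587, 0.114])
    return float(luma.mean()) / 255.0

def extract_eye_points(face_img):
    """Step S103 placeholder: centroid of the darkest pixels as a crude
    stand-in for real eye landmarks (a trained detector would be used)."""
    gray = face_img[..., :3].mean(axis=-1)
    ys, xs = np.where(gray < np.percentile(gray, 5))
    return [(int(xs.mean()), int(ys.mean()))] if len(xs) else []

def adjust_display(img, brightness, target=0.5):
    """Step S104: gamma-correct the whole frame toward a target mean luma."""
    mean_b = float(np.clip(np.mean(brightness), 1e-3, 0.999))
    gamma = np.log(target) / np.log(mean_b)
    return np.clip(255.0 * (img / 255.0) ** gamma, 0, 255).astype(np.uint8)

# Usage on synthetic frames:
display = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
face = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
tiles = segment_equal_areas(display)
levels = [region_brightness(t) for t in tiles]
points = extract_eye_points(face)   # a fuller version would localise the adjustment
adjusted = adjust_display(display, levels)
```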

In one embodiment, as shown in Fig. 2, dividing the currently displayed image of the mobile intelligent terminal into N target areas of equal area comprises:

Step S201: performing point cloud data detection on the currently displayed image to obtain a detection result;

Step S202: determining the point cloud data distribution in the currently displayed image according to the detection result;

Step S203: determining, according to the point cloud data distribution and preset rules, the segmentation form of the currently displayed image and the number N of segmented areas;

Step S204: dividing the currently displayed image into N target areas of equal area according to the segmentation form.

The beneficial effect of the above technical solution is as follows: determining the segmentation form and the number of segmented areas from the point cloud data distribution ensures that the point cloud data are distributed evenly across the segmented areas, which lays a foundation for the subsequent image processing and guarantees the consistency and objectivity of the processed samples.
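As a rough illustration of steps S201-S204, the sketch below approximates "point cloud data detection" with gradient-magnitude salient points and then picks, from a few candidate grids, the equal-area grid whose tiles hold the most uniform share of points. The candidate grids and the uniformity rule are assumptions, not the patent's preset rules.

```python
import numpy as np

def salient_points(img, thresh_pct=95):
    """Stand-in for 'point cloud data detection': keep pixels whose
    gradient magnitude is in the top few percent."""
    gray = img[..., :3].mean(axis=-1)
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    ys, xs = np.where(mag > np.percentile(mag, thresh_pct))
    return np.stack([xs, ys], axis=1)

def choose_grid(points, img_shape, candidates=((2, 2), (3, 3), (4, 4))):
    """Pick the equal-area grid whose tiles hold the most uniform share of
    points (a simple 'preset rule'); returns (rows, cols)."""
    h, w = img_shape[:2]
    best, best_spread = candidates[0], np.inf
    for rows, cols in candidates:
        r_idx = np.minimum(points[:, 1] * rows // h, rows - 1)
        c_idx = np.minimum(points[:, 0] * cols // w, cols - 1)
        counts = np.bincount(r_idx * cols + c_idx, minlength=rows * cols)
        spread = counts.std() / max(counts.mean(), 1e-6)
        if spread < best_spread:
            best, best_spread = (rows, cols), spread
    return best

img = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
pts = salient_points(img)
rows, cols = choose_grid(pts, img.shape)   # segmentation form, N = rows*cols
```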

In one embodiment, determining the current brightness characteristic of the image in each target area comprises:

extracting a feature factor of each pixel of the image in each target area;

constructing an image parameter matrix of the target area according to the feature factor of each pixel of the image in that target area;

determining the brightness parameter corresponding to each pixel according to each matrix factor parameter in the image parameter matrix;

matching the brightness parameter corresponding to each pixel against a preset database to obtain the brightness feature value corresponding to that pixel;

calculating the average brightness feature value of each target area, and determining the current brightness characteristic of the target area according to the ratio of the average brightness feature value to a standard brightness feature value.

The beneficial effect of the above technical solution is as follows: obtaining the brightness feature value of each pixel through the parameter matrix and then computing the average brightness feature value of each target area takes every pixel in the area into account, so the current brightness characteristic of each target area is determined comprehensively, which guarantees the rationality and objectivity of the final result.
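A minimal sketch of this per-area computation is given below, with a small interpolated lookup table standing in for the "preset database" and 0.6 standing in for the standard brightness feature value; both values are assumptions made for illustration.

```python
import numpy as np

# Assumed lookup standing in for the 'preset database': brightness parameter
# buckets (0-255) mapped to coarse brightness feature values.
BRIGHTNESS_LUT = np.interp(np.arange(256), [0, 64, 128, 192, 255],
                           [0.1, 0.35, 0.6, 0.85, 1.0])
STANDARD_FEATURE = 0.6   # assumed 'standard brightness feature value'

def region_brightness_feature(tile):
    """Per-pixel luma -> LUT feature value -> tile average -> ratio to the
    standard value, mirroring the per-area steps described above."""
    luma = (tile[..., :3] @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)
    features = BRIGHTNESS_LUT[luma]          # match each pixel in the "database"
    avg_feature = float(features.mean())     # average brightness feature value
    return avg_feature / STANDARD_FEATURE    # current brightness characteristic

tile = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
print(region_brightness_feature(tile))
```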

In one embodiment, as shown in Fig. 3, capturing the current face image of the target person watching the mobile intelligent terminal and extracting eye feature points from the current face image comprises:

Step S301: extracting the area where the target person's eyes are located in the current face image;

Step S302: determining the target deviation boundary of the area where the target person's eyes are located according to the pixel ratio of the current image;

Step S303: adjusting the area where the target person's eyes are located according to the target deviation boundary to obtain an adjusted eye area;

Step S304: extracting the eye feature points of the target person from the adjusted eye area according to preset eye feature parameters.

The beneficial effect of the above technical solution is as follows: determining the target deviation boundary of the area where the target person's eyes are located effectively avoids the influence of errors, so the eye feature points of the target person can be extracted more accurately. This improves the extraction precision, effectively prevents the eye region from being delimited incorrectly, and further improves practicality.
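The sketch below illustrates steps S302-S304 under strong simplifying assumptions: the deviation boundary is reduced to a fixed pixel margin, and the "preset eye feature parameters" are replaced by a dark-pixel percentile rule; a real system would use a trained eye detector.

```python
import numpy as np

def expand_eye_box(box, margin, img_shape):
    """Grow a rough eye bounding box by a 'deviation boundary' margin,
    clipped to the frame (a simple stand-in for the correction step)."""
    x0, y0, x1, y1 = box
    h, w = img_shape[:2]
    return (max(0, x0 - margin), max(0, y0 - margin),
            min(w, x1 + margin), min(h, y1 + margin))

def eye_feature_points(face_img, box, margin=8, dark_pct=10):
    """Adjust the eye area, then keep the darkest pixels inside it as crude
    'eye feature points' (pupils and lashes are usually the darkest structures)."""
    x0, y0, x1, y1 = expand_eye_box(box, margin, face_img.shape)
    roi = face_img[y0:y1, x0:x1, :3].mean(axis=-1)
    ys, xs = np.where(roi < np.percentile(roi, dark_pct))
    return [(x0 + int(x), y0 + int(y)) for x, y in zip(xs, ys)][:20]

face = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
rough_box = (260, 180, 380, 230)          # assumed rough eye region
points = eye_feature_points(face, rough_box)
```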

In one embodiment, intelligently adjusting and processing the currently displayed image based on the current brightness characteristic of each area and the eye feature points comprises:

determining the gaze area of the target user according to the eye feature points;

determining the current visual acuity index of the target user, and adjusting the brightness of the currently displayed image according to the current visual acuity index and the current brightness characteristic of each area;

evaluating, based on the current visual acuity index of the target user, the display clarity that the current brightness characteristic of each area provides to the target user, and obtaining an evaluation result;

intelligently adjusting the display scale/resolution of the currently displayed image, or enhancing the currently displayed image, according to the evaluation result.

The beneficial effect of the above technical solution is as follows: selectively processing the currently displayed image according to its current brightness characteristics combined with the visual acuity index of the target person means the processing can be chosen to match the actual needs of the target user, which further improves practicality.
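The toy decision rule below shows one way such an evaluation could map a visual acuity index and per-area brightness ratios to concrete actions (brightness boost, larger display scale, sharpening). The thresholds and action names are assumptions, not values from the patent.

```python
def plan_adjustment(vision_index, region_features, gaze_region,
                    dim_thresh=0.8, low_vision=0.8):
    """Toy decision rule: brightness ratios below dim_thresh get a brightness
    boost; a low visual acuity index additionally triggers upscaling and
    enhancement of the gazed-at region. All thresholds are assumptions."""
    actions = []
    if region_features[gaze_region] < dim_thresh:
        boost = min(1.5, dim_thresh / region_features[gaze_region])
        actions.append(("brightness", boost))
    if vision_index < low_vision:
        # Poorer eyesight: prefer a larger display scale, then sharpening.
        actions.append(("display_scale", 1.25))
        actions.append(("enhance", "unsharp_mask"))
    return actions

features = [0.9, 0.6, 1.1, 0.95]   # per-area ratios to the standard feature value
print(plan_adjustment(vision_index=0.7, region_features=features, gaze_region=1))
```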

In one embodiment, determining the target deviation boundary of the area where the target person's eyes are located according to the pixel ratio of the current image comprises:

determining, according to the pixel ratio of the current image, the mask data of the area where the target person's eyes are located and of its surroundings;

modifying the first mask data of the area where the target person's eyes are located, and obtaining the change parameters of the second mask data around that area;

constructing, according to the change parameters, a cost function describing how parameter changes in the eye area drive parameter changes in its surroundings;

screening out the target second mask data whose change parameters fall outside a preset range, and determining a first deviation boundary according to the interval corresponding to the target second mask data;

calculating, according to the cost function, the variation error of the second mask data around the eye area as the first mask data of the eye area change;

correcting the first deviation boundary according to the variation error to obtain a second deviation boundary;

confirming the second deviation boundary as the target deviation boundary.

The beneficial effect of the above technical solution is as follows: constructing a cost function that describes how the second mask data around the eye area change with the first mask data of the eye area, and using it to correct the first deviation boundary, further reduces errors; this improves precision while avoiding missed regions and improving stability.
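Purely as a sketch, the code below models the coupling between the first (eye-area) and second (surrounding) mask data with a box filter (SciPy's uniform_filter is assumed to be available), derives a first boundary wherever the leaked response exceeds a tolerance, and adds a quadratic-cost correction. The coupling model, the tolerance and the cost form are all assumptions, not the patent's construction.

```python
import numpy as np
from scipy.ndimage import uniform_filter  # assumed available for this sketch

def deviation_boundary(mask, eye_box, delta=0.2, ring=4, tol=0.05):
    """Perturb the eye-region ('first') mask data by delta, measure how the
    surrounding ('second') mask data respond through an assumed box-filter
    coupling, and widen the box; a quadratic cost of the mean response adds a
    small corrective margin. The mask array only supplies the geometry here."""
    x0, y0, x1, y1 = eye_box
    h, w = mask.shape
    bump = np.zeros_like(mask)
    bump[y0:y1, x0:x1] = delta                         # modify the first mask data
    response = uniform_filter(bump, size=2 * ring + 1)  # leak into the surroundings
    response[y0:y1, x0:x1] = 0.0                       # keep only the second mask data
    band = response[max(0, y0 - ring):y1 + ring, max(0, x0 - ring):x1 + ring]
    grow = ring if (band > tol).any() else 0           # first deviation boundary
    err = float(band.mean())                           # variation error via the cost fn
    g = grow + int(round(10.0 * err * err))            # corrected, second boundary
    return (max(0, x0 - g), max(0, y0 - g), min(w, x1 + g), min(h, y1 + g))

mask = np.random.rand(480, 640).astype(np.float32)
print(deviation_boundary(mask, (260, 180, 380, 230)))
```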

In one embodiment, before intelligently adjusting and processing the currently displayed image based on the current brightness characteristic of each area and the eye feature points, the method further comprises:

determining the spatial aggregation feature of the pixels in the currently displayed image according to the feature factor of each pixel of the currently displayed image;

determining the aggregation characteristics of the pixels in the currently displayed image according to the spatial aggregation feature;

determining, based on the aggregation characteristics, the low-dimensional pixel representation distribution in the currently displayed image;

extracting the deep feature of each low-dimensional pixel in the currently displayed image according to the low-dimensional pixel representation distribution;

taking the deep feature of each low-dimensional pixel as the feature to be processed when the currently displayed image is intelligently adjusted and processed.

The beneficial effect of the above technical solution is as follows: determining in advance the features to be processed during the intelligent adjustment and processing of the currently displayed image allows the subsequent processing to concentrate on those features, which improves processing efficiency.
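The sketch below is a crude stand-in for this feature chain: each pixel is described by colour plus normalised position (a simple spatial-aggregation cue) and projected onto a few principal components, which play the role of the low-dimensional "deep" features. The feature design and k=3 are assumptions for illustration.

```python
import numpy as np

def low_dim_pixel_features(img, k=3):
    """Describe every pixel by colour plus normalised position, then project
    onto the top-k principal components (PCA via SVD)."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.column_stack([
        img[..., :3].reshape(-1, 3) / 255.0,          # per-pixel feature factors
        xs.reshape(-1) / w, ys.reshape(-1) / h])      # spatial aggregation cue
    centred = feats - feats.mean(axis=0)
    # The low-dimensional pixel representation distribution:
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:k].T                         # "deep" features to process

img = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
features = low_dim_pixel_features(img)
print(features.shape)   # (19200, 3)
```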

In one embodiment, determining the mask data of the area where the target person's eyes are located and of its surroundings according to the pixel ratio of the current image comprises:

determining the configuration information of the device that captured the current image according to the pixel ratio of the current image;

generating a mask matrix of the image captured by the device according to the configuration information;

determining the mask vector of each pixel according to the mask matrix combined with the pixel value of each pixel of the current image;

grouping the mask vectors of the pixels according to their vector features to obtain grouping results;

determining the aggregation of grouped pixels in each grouping result, and obtaining the first pixels on the periphery of the aggregated grouped pixels and the second pixels around them;

calculating the phase coherence between each first pixel and second pixel;

marking the target second pixels for which the calculated phase coherence between the first pixel and the second pixel is less than a preset threshold;

determining the pixel mask bits in the current image according to the marking of the target second pixels;

obtaining eye pixel features, and matching them against the aggregated grouped pixels of the current image to delimit the area where the target person's eyes are located;

performing pixel parsing on the pixel mask bits in the current image to obtain the mask data around the area where the target person's eyes are located.

The beneficial effects of the above technical solution are as follows: determining the area where the target person's eyes are located from the pixel aggregation of the current image makes it possible to locate the face region intuitively from that aggregation and then to determine the eye region accurately, which improves the accuracy of the detection result. Further, determining the pixel mask bits in the current image makes it possible to identify the irrelevant mask areas from the pixel distribution parameters, which provides an effective reference sample for the subsequent acquisition of mask data.
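The phase-coherence test is the least obvious step, so the sketch below illustrates one plausible reading: each pixel gets a local Fourier-phase descriptor, and a "second" pixel is marked when the cosine of its phase difference with the paired peripheral "first" pixel falls below a threshold. The window size, the descriptor and the coherence measure are all assumptions.

```python
import numpy as np

def local_phase(gray, y, x, win=8):
    """Dominant Fourier phase of a small window around (y, x), used here as a
    crude per-pixel phase descriptor."""
    patch = gray[max(0, y - win):y + win, max(0, x - win):x + win]
    spec = np.fft.fft2(patch - patch.mean())
    flat = np.abs(spec).ravel()
    flat[0] = 0.0                                     # ignore the DC bin
    return float(np.angle(spec.ravel()[flat.argmax()]))

def mark_low_coherence(gray, first_px, second_px, thresh=0.5):
    """Mark 'second' pixels whose phase differs too much from the paired
    'first' pixel; coherence here is cos(phase difference), an assumed measure."""
    marked = []
    for (y1, x1), (y2, x2) in zip(first_px, second_px):
        coherence = np.cos(local_phase(gray, y1, x1) - local_phase(gray, y2, x2))
        if coherence < thresh:
            marked.append((y2, x2))                   # candidate pixel mask bit
    return marked

gray = np.random.rand(240, 320)
outer = [(100, 100), (100, 140)]                      # pixels on a group's periphery
neigh = [(100, 110), (100, 150)]                      # their surrounding pixels
print(mark_low_coherence(gray, outer, neigh))
```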

This embodiment also discloses an image processing system for a mobile intelligent terminal; as shown in Fig. 4, the system comprises:

a segmentation module 401, configured to divide the image currently displayed by the mobile intelligent terminal into N target areas of equal area;

a determination module 402, configured to determine the current brightness characteristic of the image in each target area;

an extraction module 403, configured to capture the current face image of the target person watching the mobile intelligent terminal and extract eye feature points from the current face image;

a processing module 404, configured to intelligently adjust and process the currently displayed image based on the current brightness characteristic of each area and the eye feature points.

The working principle and beneficial effects of the above technical solution have already been explained for the method claims and are not repeated here.
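To illustrate the module split of Fig. 4, the sketch below wires four plain Python classes together in the same roles; their internals are deliberately trivial stand-ins (grid split, mean brightness, dark-pixel points, global gamma) and are not the patent's implementation.

```python
import numpy as np

class SegmentationModule:                 # module 401
    def run(self, img, rows=3, cols=3):
        h, w = img.shape[:2]
        return [img[r*h//rows:(r+1)*h//rows, c*w//cols:(c+1)*w//cols]
                for r in range(rows) for c in range(cols)]

class DeterminationModule:                # module 402
    def run(self, tiles):
        return [float(t.mean()) / 255.0 for t in tiles]

class ExtractionModule:                   # module 403
    def run(self, face_img):
        gray = face_img[..., :3].mean(axis=-1)
        ys, xs = np.where(gray < np.percentile(gray, 5))
        return list(zip(xs.tolist(), ys.tolist()))[:10]

class ProcessingModule:                   # module 404
    def run(self, img, brightness, eye_points, target=0.5):
        # eye_points would steer a local adjustment in a fuller version.
        m = float(np.clip(np.mean(brightness), 1e-3, 0.999))
        gamma = np.log(target) / np.log(m)
        return np.clip(255.0 * (img / 255.0) ** gamma, 0, 255).astype(np.uint8)

# Wiring the system together, mirroring Fig. 4:
display = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
face = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
seg, det, ext, proc = (SegmentationModule(), DeterminationModule(),
                       ExtractionModule(), ProcessingModule())
out = proc.run(display, det.run(seg.run(display)), ext.run(face))
```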

Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the disclosure set forth herein. This application is intended to cover any variations, uses or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the technical field that are not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure being indicated by the following claims.

It should be understood that the present disclosure is not limited to the precise structures described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (9)

1. An image processing method for a mobile intelligent terminal, characterized by comprising the following steps:
dividing the image currently displayed by the mobile intelligent terminal into N target areas of equal area;
determining the current brightness characteristic of the image in each target area;
capturing the current face image of a target person watching the mobile intelligent terminal, and extracting eye feature points from the current face image;
intelligently adjusting and processing the currently displayed image based on the current brightness characteristic of each area and the eye feature points.

2. The image processing method for a mobile intelligent terminal according to claim 1, characterized in that dividing the currently displayed image of the mobile intelligent terminal into N target areas of equal area comprises:
performing point cloud data detection on the currently displayed image to obtain a detection result;
determining the point cloud data distribution in the currently displayed image according to the detection result;
determining, according to the point cloud data distribution and preset rules, the segmentation form of the currently displayed image and the number N of segmented areas;
dividing the currently displayed image into N target areas of equal area according to the segmentation form.

3. The image processing method for a mobile intelligent terminal according to claim 1, characterized in that determining the current brightness characteristic of the image in each target area comprises:
extracting a feature factor of each pixel of the image in each target area;
constructing an image parameter matrix of the target area according to the feature factor of each pixel of the image in that target area;
determining the brightness parameter corresponding to each pixel according to each matrix factor parameter in the image parameter matrix;
matching the brightness parameter corresponding to each pixel against a preset database to obtain the brightness feature value corresponding to that pixel;
calculating the average brightness feature value of each target area, and determining the current brightness characteristic of the target area according to the ratio of the average brightness feature value to a standard brightness feature value.

4. The image processing method for a mobile intelligent terminal according to claim 1, characterized in that capturing the current face image of the target person watching the mobile intelligent terminal and extracting eye feature points from the current face image comprises:
extracting the area where the target person's eyes are located in the current face image;
determining the target deviation boundary of the area where the target person's eyes are located according to the pixel ratio of the current image;
adjusting the area where the target person's eyes are located according to the target deviation boundary to obtain an adjusted eye area;
extracting the eye feature points of the target person from the adjusted eye area according to preset eye feature parameters.

5. The image processing method for a mobile intelligent terminal according to claim 1, characterized in that intelligently adjusting and processing the currently displayed image based on the current brightness characteristic of each area and the eye feature points comprises:
determining the gaze area of the target user according to the eye feature points;
determining the current visual acuity index of the target user, and adjusting the brightness of the currently displayed image according to the current visual acuity index and the current brightness characteristic of each area;
evaluating, based on the current visual acuity index of the target user, the display clarity that the current brightness characteristic of each area provides to the target user, and obtaining an evaluation result;
intelligently adjusting the display scale/resolution of the currently displayed image, or enhancing the currently displayed image, according to the evaluation result.

6. The image processing method for a mobile intelligent terminal according to claim 4, characterized in that determining the target deviation boundary of the area where the target person's eyes are located according to the pixel ratio of the current image comprises:
determining, according to the pixel ratio of the current image, the mask data of the area where the target person's eyes are located and of its surroundings;
modifying the first mask data of the area where the target person's eyes are located, and obtaining the change parameters of the second mask data around that area;
constructing, according to the change parameters, a cost function describing how parameter changes in the eye area drive parameter changes in its surroundings;
screening out the target second mask data whose change parameters fall outside a preset range, and determining a first deviation boundary according to the interval corresponding to the target second mask data;
calculating, according to the cost function, the variation error of the second mask data around the eye area as the first mask data of the eye area change;
correcting the first deviation boundary according to the variation error to obtain a second deviation boundary;
confirming the second deviation boundary as the target deviation boundary.

7. The image processing method for a mobile intelligent terminal according to claim 3, characterized in that, before intelligently adjusting and processing the currently displayed image based on the current brightness characteristic of each area and the eye feature points, the method further comprises:
determining the spatial aggregation feature of the pixels in the currently displayed image according to the feature factor of each pixel of the currently displayed image;
determining the aggregation characteristics of the pixels in the currently displayed image according to the spatial aggregation feature;
determining, based on the aggregation characteristics, the low-dimensional pixel representation distribution in the currently displayed image;
extracting the deep feature of each low-dimensional pixel in the currently displayed image according to the low-dimensional pixel representation distribution;
taking the deep feature of each low-dimensional pixel as the feature to be processed when the currently displayed image is intelligently adjusted and processed.

8. The image processing method for a mobile intelligent terminal according to claim 6, characterized in that determining the mask data of the area where the target person's eyes are located and of its surroundings according to the pixel ratio of the current image comprises:
determining the configuration information of the device that captured the current image according to the pixel ratio of the current image;
generating a mask matrix of the image captured by the device according to the configuration information;
determining the mask vector of each pixel according to the mask matrix combined with the pixel value of each pixel of the current image;
grouping the mask vectors of the pixels according to their vector features to obtain grouping results;
determining the aggregation of grouped pixels in each grouping result, and obtaining the first pixels on the periphery of the aggregated grouped pixels and the second pixels around them;
calculating the phase coherence between each first pixel and second pixel;
marking the target second pixels for which the calculated phase coherence between the first pixel and the second pixel is less than a preset threshold;
determining the pixel mask bits in the current image according to the marking of the target second pixels;
obtaining eye pixel features, and matching them against the aggregated grouped pixels of the current image to delimit the area where the target person's eyes are located;
performing pixel parsing on the pixel mask bits in the current image to obtain the mask data around the area where the target person's eyes are located.

9. An image processing system for a mobile intelligent terminal, characterized in that the system comprises:
a segmentation module, configured to divide the image currently displayed by the mobile intelligent terminal into N target areas of equal area;
a determination module, configured to determine the current brightness characteristic of the image in each target area;
an extraction module, configured to capture the current face image of the target person watching the mobile intelligent terminal and extract eye feature points from the current face image;
a processing module, configured to intelligently adjust and process the currently displayed image based on the current brightness characteristic of each area and the eye feature points.
CN202210312454.7A 2022-03-28 2022-03-28 Mobile intelligent terminal image processing method and system Pending CN114693553A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210312454.7A CN114693553A (en) 2022-03-28 2022-03-28 Mobile intelligent terminal image processing method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210312454.7A CN114693553A (en) 2022-03-28 2022-03-28 Mobile intelligent terminal image processing method and system

Publications (1)

Publication Number Publication Date
CN114693553A true CN114693553A (en) 2022-07-01

Family

ID=82140804

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210312454.7A Pending CN114693553A (en) 2022-03-28 2022-03-28 Mobile intelligent terminal image processing method and system

Country Status (1)

Country Link
CN (1) CN114693553A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115690892A (en) * 2023-01-03 2023-02-03 京东方艺云(杭州)科技有限公司 Squinting recognition method and device, electronic equipment and storage medium
CN115690892B (en) * 2023-01-03 2023-06-13 京东方艺云(杭州)科技有限公司 A squint recognition method, device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN105744256B (en) Based on the significant objective evaluation method for quality of stereo images of collection of illustrative plates vision
Tian et al. A multi-order derivative feature-based quality assessment model for light field image
CN102867295B (en) A kind of color correction method for color image
CN112017222A (en) Video panorama stitching and three-dimensional fusion method and device
CN110298829A (en) A kind of lingual diagnosis method, apparatus, system, computer equipment and storage medium
CN105357519B (en) Quality objective evaluation method for three-dimensional image without reference based on self-similarity characteristic
CN115965889A (en) A video quality assessment data processing method, device and equipment
CN104700405B (en) A kind of foreground detection method and system
Tu et al. V-PCC projection based blind point cloud quality assessment for compression distortion
CN111145135A (en) Image descrambling processing method, device, equipment and storage medium
CN113327234A (en) Video redirection quality evaluation method based on space-time saliency classification and fusion
CN104361583B (en) A kind of method determining asymmetric distortion three-dimensional image objective quality
CN108447059A (en) It is a kind of to refer to light field image quality evaluating method entirely
Yang et al. EHNQ: Subjective and objective quality evaluation of enhanced night-time images
CN114693553A (en) Mobile intelligent terminal image processing method and system
CN113298779B (en) Video redirection quality objective evaluation method based on reverse reconstruction grid
Liu et al. Perceptual quality assessment of omnidirectional images: A benchmark and computational model
CN103139591B (en) A kind of 3D vedio color auto-correction method of graphic based processor
CN111641822A (en) Method for evaluating quality of repositioning stereo image
CN109410197B (en) Method and device for positioning detection area of liquid crystal display
CN113110733A (en) Virtual field interaction method and system based on remote duplex
KR101841750B1 (en) Apparatus and Method for correcting 3D contents by using matching information among images
CN110135274B (en) Face recognition-based people flow statistics method
CN103997642A (en) Stereo camera remote convergence shooting quality objective evaluation method
CN114612994A (en) Method and device for training wrinkle detection model and method and device for detecting wrinkles

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination