CN109710371A - Font adjusting method, apparatus and system - Google Patents


Info

Publication number
CN109710371A
Authority
CN
China
Prior art keywords
face image
depth
font
average
mean value
Prior art date
Legal status
Pending
Application number
CN201910135199.1A
Other languages
Chinese (zh)
Inventor
廖声洋
Current Assignee
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Megvii Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Megvii Technology Co Ltd filed Critical Beijing Megvii Technology Co Ltd
Priority to CN201910135199.1A
Publication of CN109710371A


Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The present invention provides a font adjusting method, apparatus, and system, relating to the technical field of image processing. The method comprises: obtaining a face image of a target object; collecting depth information of the face image; and adjusting the font of specified text based on the depth information, where the font includes one or more of the size, thickness, and color of the text. The present invention can adjust the font automatically, without manual adjustment by the user, effectively improving the user experience.

Description

Font adjusting method, device and system
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a font adjusting method, apparatus, and system.
Background
Display devices are the hardware carriers of human-computer interaction interfaces in daily life, such as mobile phone screens, wearable device screens, and display screens in shopping malls. Text is usually displayed on existing display devices, and attributes of the text such as size and thickness can be adjusted by the user as required. When adjusting fonts, users mostly rely on buttons, slide bars, or preset font-level settings, and generally adjust the font manually, according to the adjustment mode preset by the display device, only after the text has become uncomfortable to read. This manual adjustment is cumbersome to operate and results in a poor user experience.
Disclosure of Invention
In view of this, the present invention provides a font adjusting method, apparatus and system, which can automatically adjust a font without manual adjustment by a user, and effectively improve user experience.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
in a first aspect, an embodiment of the present invention provides a font adjusting method, including: acquiring a face image of a target object; collecting depth information of the face image; and adjusting the font of the specified character based on the depth information, wherein the font comprises one or more of the size, the thickness and the color of the character.
Further, the step of acquiring depth information of the face image includes: acquiring the depth value of each pixel point in the face image through a depth sensor; and determining the depth value of each pixel point in the face image as the depth information of the face image.
Further, the step of adjusting the font of the specified text based on the depth information includes: calculating the average depth value of the face image based on the depth value of each pixel point in the face image; searching a font corresponding to the depth average value from a preset association table; wherein, the association table stores the corresponding relation between the depth average value and the font; and carrying out font adjustment on the specified characters according to the searched fonts.
Further, the step of calculating the average depth value of the face image based on the depth values of the pixels in the face image includes: and carrying out mean value calculation on the depth values of all pixel points in the face image, and determining the obtained mean value as the depth mean value of the face image.
Further, the step of calculating the average depth value of the face image based on the depth values of the pixels in the face image includes: and carrying out mean value calculation on the depth values of all pixel points positioned in the eye region in the face image, and determining the obtained mean value as the depth mean value of the face image.
Further, the step of calculating the average depth value of the face image based on the depth values of the pixels in the face image includes: carrying out mean value calculation on the depth values of all pixel points in the face image to obtain a first mean value; carrying out mean value calculation on the depth values of all pixel points positioned in the eye region in the face image to obtain a second mean value; and generating a depth average value of the face image according to the first average value and the second average value.
Further, the step of calculating the mean of the depth values of the pixels in the face image to obtain a first mean includes: averaging the depth values of the pixels in the face image according to a first mean-value calculation formula to obtain the first mean; wherein the first mean-value calculation formula is:

D̄1 = (1 / (width0 × width0 is replaced by height0 dimensions: width0 × height0)) × Σ D_{xi,yi}

where D̄1 is the first mean, (x0, y0, width0, height0) is the position parameter of the face image, and D_{xi,yi} is the depth value of the pixel (xi, yi) in the face image.
Further, the step of generating a depth average of the face image according to the first mean and the second mean includes: generating the depth average of the face image according to the first mean, the second mean, and a second mean-value calculation formula; wherein the second mean-value calculation formula is:

D̄ = (1 − k) × D̄1 + k × D̄2

where D̄ is the depth average of the face image, D̄1 is the first mean, D̄2 is the second mean, k is the weight of the second mean, and 0.5 < k < 1.
Further, the method of setting the association table includes the following step: determining the correspondence between the depth average and the font using a font level calculation formula; wherein the font level calculation formula is:

L ∝ D̄ / (width0 × height0)

where L is the level of the font, the level including one or more of the size level, thickness level, and color level of the text; D̄ is the depth average; and width0 and height0 are the width and height, respectively, of the face image.
In a second aspect, an embodiment of the present invention further provides a font adjusting apparatus, where the apparatus includes: the image acquisition module is used for acquiring a face image of the target object; the depth information acquisition module is used for acquiring the depth information of the face image; and the font adjusting module is used for adjusting the font of the specified character based on the depth information, wherein the font comprises one or more of the size, thickness and color of the character.
In a third aspect, an embodiment of the present invention provides a font adjusting system, where the system includes: the device comprises an image acquisition device, a processor and a storage device; the image acquisition device is used for acquiring a face image of the target object; the storage means has stored thereon a computer program which, when executed by the processor, performs the method of any of the first aspects.
In a fourth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs the steps of the method according to any one of the above first aspects.
The embodiment of the invention provides a font adjusting method, a font adjusting device and a font adjusting system, which can firstly acquire a face image of a target object, collect depth information of the face image and then adjust the font (one or more of size, thickness and color) of a specified character based on the depth information. Because the distance between the face of the target object (such as a user) and the electronic equipment for executing the font adjusting method is related to the depth information, the font adjusting mode according to the depth information can better meet the user requirements, and the user experience is effectively improved without manual adjustment of the user.
Additional features and advantages of the invention will be set forth in the description that follows, and in part will be obvious from the description, or may be learned by practicing the technology of the disclosure.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
FIG. 2 is a flow chart of a font adjustment method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a depth image provided by an embodiment of the invention;
fig. 4 shows a schematic diagram of face key point annotation provided in the embodiment of the present invention;
FIG. 5 illustrates a schematic diagram of the operation of a TOF sensor provided by an embodiment of the present invention;
fig. 6 shows a block diagram of a font adjusting apparatus according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In consideration of the fact that the existing manual font adjustment mode is complex to operate and poor in user experience, the font adjustment method, the font adjustment device and the font adjustment system provided by the embodiment of the invention can be applied to intelligent terminals such as smart phones, tablet computers, wearable devices, computers and advertising screens, and for convenience of understanding, detailed descriptions are given below to the embodiment of the invention.
Example one:
first, an example electronic device 100 for implementing the font adjusting method, apparatus and system of the embodiments of the present invention is described with reference to fig. 1.
As shown in fig. 1, an electronic device 100 includes one or more processors 102, one or more memory devices 104, an input device 106, an output device 108, and an image capture device 110, which are interconnected via a bus system 112 and/or other type of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in fig. 1 are exemplary only, and not limiting, and the electronic device may have other components and structures as desired.
The processor 102 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 100 to perform desired functions.
The storage 104 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. On which one or more computer program instructions may be stored that may be executed by processor 102 to implement client-side functionality (implemented by the processor) and/or other desired functionality in embodiments of the invention described below. Various applications and various data, such as various data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 108 may output various information (e.g., images or sounds) to the outside (e.g., a user), and may include one or more of a display, a speaker, and the like.
The image capture device 110 may take images (e.g., photographs, videos, etc.) desired by the user and store the taken images in the storage device 104 for use by other components.
For example, an example electronic device for implementing a font adjustment method, apparatus and system according to embodiments of the present invention may be implemented on a smart terminal such as a smart phone, a tablet computer, a wearable device, a computer, and an advertisement screen.
Example two:
referring to a flowchart of a font adjusting method shown in fig. 2, the method can be applied to the electronic device provided in the previous embodiment, and the method specifically includes the following steps:
in step S202, a face image of the target object is acquired.
In the present embodiment, the face image of the target object may be acquired by an image acquisition device such as an image sensor; wherein the target object may be a person. The face image may be an image directly captured by the image capturing device and including only a face region, or may be an image of a face region cut out from a complete original image of the target object captured by the image capturing device.
Step S204, collecting the depth information of the face image.
In an alternative embodiment, the depth information may be obtained from a face depth image corresponding to the face image. For ease of understanding, reference may be made to the depth image diagram shown in fig. 3, which symbolically illustrates an image capable of characterizing depth information. The depth image, also called a range image, is an image in which the distance (depth) from the electronic device to each point on the target object (i.e., the person) serves as the pixel value; the gray value can thus be understood in the same way as the color depth of a pixel in a black-and-white image. As can be seen from fig. 3, in the depth image corresponding to the original image, pixels at different distances have different gray values; for example, the farther the distance, the lighter the color. Of course, red, green, and blue colors of different shades can also be used to represent depth images carrying different distance information. In a specific implementation, the face depth image corresponding to the face image may first be collected; the depth information of the face image is then calculated from the face depth image by coordinate conversion, using a method of estimating image depth. Methods of estimating image depth include binocular stereo vision, TOF (Time of Flight), structured light, and laser scanning.
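As a rough illustration of how a grayscale depth image encodes distance, consider the sketch below; the linear encoding (gray 0 at d_min, gray 255 at d_max) and the function name are assumptions for the example, not specified by this disclosure:

```python
# Hypothetical sketch: recover per-pixel distances from an 8-bit depth image.
# Assumes a linear encoding where gray 0 maps to d_min_mm and gray 255 to d_max_mm.
def gray_to_depth(gray_image, d_min_mm=300, d_max_mm=2000):
    """Convert an 8-bit grayscale depth image (a list of pixel rows) to distances in mm."""
    scale = (d_max_mm - d_min_mm) / 255.0
    return [[d_min_mm + g * scale for g in row] for row in gray_image]

depth = gray_to_depth([[0, 128, 255]])  # nearest, middle, farthest gray values
```

A real depth sensor reports distances directly; this sketch only mirrors the gray-value-to-distance intuition described above.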
And step S206, adjusting the font of the specified character based on the depth information, wherein the font comprises one or more of the size, thickness and color of the character. The designated characters comprise characters displayed on a display interface of the electronic equipment and the like.
In practical applications, in order to provide a comfortable visual experience for the target object, the font of the specified text can be made to change in accordance with the depth information, which characterizes the distance between the target object and the electronic device: the farther the target object is from the electronic device, the larger, thicker, and/or darker the font of the specified text; the closer the target object is, the smaller, thinner, and/or lighter the font. In this way, the target object has a consistent viewing experience of the display interface of the electronic device at every distance range.
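The farther-is-larger rule above can be sketched as a simple threshold mapping; the distance cut-offs, sizes, weights, and colors below are invented for illustration, since the disclosure fixes no concrete values:

```python
def adjust_font(depth_mm, base_size=14):
    """Map viewer distance to font attributes: farther -> larger, bolder, darker.
    All thresholds and values here are assumptions for the sketch."""
    if depth_mm < 450:     # close viewer: smaller, thinner, lighter text
        return {"size": base_size, "weight": "regular", "color": "#444444"}
    elif depth_mm < 900:   # medium distance
        return {"size": base_size + 4, "weight": "medium", "color": "#222222"}
    else:                  # far viewer: larger, bolder, darker text
        return {"size": base_size + 8, "weight": "bold", "color": "#000000"}
```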
The font adjusting method provided by the embodiment of the invention can firstly acquire the face image of the target object, collect the depth information of the face image and then adjust the font (one or more of the size, thickness and color) of the specified characters based on the depth information. Because the distance between the face of the target object (such as a user) and the electronic equipment for executing the font adjusting method is related to the depth information, the font adjusting mode according to the depth information can better meet the user requirements, and the user experience is effectively improved without manual adjustment of the user.
In an alternative embodiment, the above process of acquiring the face image of the target object may be performed with reference to the following first step and second step:
step one, a preview frame image is collected through an image sensor.
In this embodiment, the image sensor may be first activated in response to the font adjustment instruction, and then the image sensor may capture a preview frame image of the target object. The font adjustment instruction may be font adjustment request information input to the electronic device by a user through a screen touch, a key operation, a voice operation, or the like, such as turning on a preset "font adjustment" function button. In response to the font adjustment instruction, the electronic device may turn on an image sensor, such as an RGB sensor, that captures a preview frame image of the target object.
And step two, detecting whether the preview frame image contains the face of the target object.
In particular implementations, the preview frame image may first be input into a trained face detection model (such as resnet34, etc.). Then, carrying out face detection on the preview frame image through a face detection model, and judging whether a face exists in the preview frame image or not; if not, ending the image detection; if yes, outputting the position parameter of the face area on the preview frame image and the position parameter of the key point of the human eye in the face area. Wherein the position parameters of the face region may be expressed as Rect (x0, y0, width0, height0), (x0, y0) indicating coordinates of a specified point of the face region, such as coordinates of the top left vertex of the face region and coordinates of the center point of the face region, and width0 and height0 indicating a width value and a height value of the face region, respectively; the human eye key point can be represented by a pixel point, and the human eye key point position parameter is represented by { (xL, yL), (xR, yR) }, (xL, yL) is the pixel point position of the left eye key point, and (xR, yR) is the pixel point position of the right eye key point; alternatively, the eye key points may be represented by eye regions, the eye key point position parameter may be represented by Rect { (xL, yL, width1, height1), (xR, yR, width2, height2) }, and the following description will be made by taking Rect { (xL, yL, width1, height1) as an example: (xL, yL) denotes coordinates of a designated point of the left-eye region, such as the coordinates of the top left point of the left-eye region and the coordinates of the center point of the left-eye region, and width1 and height1 denote the width value and height value of the left-eye region, respectively. And finally, determining the face image of the target object according to the position parameters of the face area.
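The detector output described above — a face rectangle plus per-eye rectangles — can be modeled with small record types, and the face image cut from the preview frame by slicing; the type and function names here are illustrative, not from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: int       # designated-point x coordinate (x0 / xL / xR)
    y: int       # designated-point y coordinate (y0 / yL / yR)
    width: int
    height: int

@dataclass
class FaceDetection:
    face: Rect        # Rect(x0, y0, width0, height0)
    left_eye: Rect    # Rect(xL, yL, width1, height1)
    right_eye: Rect   # Rect(xR, yR, width2, height2)

def crop(image, r):
    """Cut a rectangular region (e.g., the face) out of a preview frame,
    where the image is a list of pixel rows indexed image[y][x]."""
    return [row[r.x:r.x + r.width] for row in image[r.y:r.y + r.height]]
```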
The face detection model can be obtained by the following training method shown in the steps 1) to 5):
step 1): a specified number (e.g., 10 ten thousand) of face images are collected and stored in a database.
Step 2): accurately annotate the face key points of the face images; a schematic diagram of face key point annotation is shown in FIG. 4. The face key points include eye contour points, nose contour points, upper lip contour points, lower lip contour points, and the like.
Step 3): and dividing the labeled face key points into a training set, a verification set and a test set according to a certain proportion.
Step 4): and performing model training on the training set, verifying an intermediate result in the training process by using the verification set, adjusting training parameters in real time, and stopping training when the training precision and the verification precision reach certain thresholds to obtain a trained face detection model.
Step 5): and testing the face detection model obtained in the last step by using a test set to measure the performance and the capability of the face detection model.
The embodiment provides a specific implementation manner of acquiring depth information of a face image, including: acquiring the depth value of each pixel point in the face image through a depth sensor; and determining the depth value of each pixel point in the face image as the depth information of the face image. In particular implementations, the depth information of the face image may also be further divided according to different regions of the face image, such as: the depth values of the pixel points in the whole area of the face image form global depth information, and the depth values of the pixel points in the eye area of the face image form eye depth information and the like.
Based on the step of detecting whether the preview frame image contains the face of the target object, the depth sensor can be started after the preview frame image is judged to contain the face; therefore, the occupancy rate of hardware resources can be reduced; the depth sensor may comprise a TOF sensor. For ease of understanding, reference may be made to the following specific steps (1) and (2):
(1) Starting the TOF sensor after the face image of the target object has been acquired based on the font adjustment instruction. The TOF sensor may be embedded inside the electronic device.
(2) Acquiring a depth image of the face image through the TOF sensor, and obtaining the depth information from the depth image. Referring to the schematic diagram of the operating principle of the TOF sensor shown in fig. 5: a main logic module in the TOF sensor sends a pulse that triggers a light source to emit modulated infrared light; the emitted infrared light meets the target object (i.e., the person) and is reflected back to a photodetector; the sensor then calculates the distance between the target object and the TOF sensor from the time difference or phase difference between the emitted and reflected light, and converts this distance into the depth information of the face image. In practical applications, the TOF sensor may acquire global depth information for the whole preview frame image in which the face image is located; in this embodiment, only the local depth information corresponding to the face image is used.
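The time-difference and phase-difference calculations mentioned above reduce to standard TOF formulas — d = c·Δt/2 for a pulse round trip, and d = c·φ/(4π·f_mod) for a modulated-phase measurement; the function names below are illustrative:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_s):
    """Pulse variant: light covers the distance twice (out and back), so halve it."""
    return C * round_trip_s / 2.0

def tof_distance_from_phase_m(phase_rad, mod_freq_hz):
    """Phase variant: d = c * phi / (4 * pi * f_mod) for modulation frequency f_mod."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)
```

For scale: a 10 ns round trip corresponds to roughly 1.5 m between sensor and face.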
Based on the depth information, the embodiment provides a specific implementation process for adjusting the font of the specified text, and may refer to the following steps a and B:
and step A, calculating the average depth value of the face image based on the depth value of each pixel point in the face image.
There are various ways to calculate the depth average of the face image, such as the following three ways:
the first method is as follows: carrying out mean value calculation on the depth values of all pixel points in the face image, and determining the obtained mean value as the depth mean value of the face image; in a specific implementation, the depth value D of each pixel xi, yi in the face image may be calculated according to a first mean value calculation formula shown in formula (1) based on the position parameter Rect (x0, y0, width0, height0) of the face imagexi,yiCarrying out mean value calculation to obtain a depth mean value of the face image; where (x0, y0) indicates the coordinates of the lower left vertex of the face region, xi is in the range of (x0, x0+ width0) and yi is in the range of (y0, y0+ height 0).
Wherein,is the depth average of the face image.
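Formula (1) is a plain average over the face rectangle; a minimal sketch, with the depth map stored as a 2D list indexed depth[y][x]:

```python
def region_depth_mean(depth, x0, y0, width0, height0):
    """Average the depth values of all pixels inside Rect(x0, y0, width0, height0),
    as in formula (1)."""
    total = 0.0
    for yi in range(y0, y0 + height0):
        for xi in range(x0, x0 + width0):
            total += depth[yi][xi]
    return total / (width0 * height0)
```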
The second way: average the depth values of the pixels located in the eye region of the face image, and take the obtained mean as the depth average of the face image.

In a specific implementation, the depth mean D̄L over the pixels of the left eye and the depth mean D̄R over the pixels of the right eye in the face image may be calculated separately. When the left eye is represented by only one eye key point (i.e., one pixel), the depth mean corresponding to the left eye is simply the depth value of that pixel. When the left eye is determined by the position parameter Rect(xL, yL, width1, height1) of the left-eye region in the face image, the depth mean corresponding to the left eye can be obtained by averaging the depth values D_{xi,yi} of the pixels (xi, yi) of the left-eye region with reference to formula (1); here (xL, yL) represents the lower-left vertex coordinates of the left-eye region, xi ranges over (xL, xL + width1), and yi ranges over (yL, yL + height1). It can be understood that the depth mean corresponding to the right eye is calculated in the same way as that of the left eye, which is not repeated here.

Then, the depth mean over the pixels of the binocular eye region is calculated according to formula (2):

D̄2 = (D̄L + D̄R) / 2    (2)

where D̄2 is the depth mean over the pixels of the eye region, i.e., the depth average of the face image determined in this way.
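The second way — averaging each eye region separately and then averaging the two results per formula (2) — can be sketched as follows, with each rect passed as an (x, y, width, height) tuple (an assumed representation):

```python
def eye_depth_mean(depth, left_rect, right_rect):
    """Formula (2): mean of the left-eye and right-eye depth means.
    depth is a 2D list indexed depth[y][x]; each rect is (x, y, width, height)."""
    def rect_mean(rect):
        x, y, w, h = rect
        vals = [depth[yi][xi] for yi in range(y, y + h) for xi in range(x, x + w)]
        return sum(vals) / len(vals)
    return (rect_mean(left_rect) + rect_mean(right_rect)) / 2.0
```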
The third way: first, average the depth values of all pixels in the face image with reference to the first way to obtain a first mean D̄1; then, average the depth values of the pixels located in the eye region of the face image with reference to the second way to obtain a second mean D̄2; finally, generate the depth average of the face image from the first mean and the second mean. In a specific implementation, the depth average of the face image may be generated from the first mean D̄1, the second mean D̄2, and the second mean-value calculation formula shown in formula (3):

D̄ = (1 − k) × D̄1 + k × D̄2    (3)

where D̄ is the depth average of the face image and k is the weight of the second mean; since the eyes experience fonts more directly than the face as a whole, the range of k can be set as 0.5 < k < 1.
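The weighted combination of formula (3) is then one line, plus a guard for the 0.5 < k < 1 constraint that favors the eye-region mean; k = 0.7 below is an assumed default, not a value from the disclosure:

```python
def combined_depth_mean(face_mean, eye_mean, k=0.7):
    """Formula (3): depth average = (1 - k) * first mean + k * second mean,
    with the eye-region (second) mean weighted more heavily."""
    if not 0.5 < k < 1:
        raise ValueError("k must satisfy 0.5 < k < 1")
    return (1 - k) * face_mean + k * eye_mean
```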
Of course, the above three ways are merely exemplary illustrations of ways of determining the depth average of the face image, and should not be construed as limitations.
Step B: based on the depth average of the face image determined in any of the above ways, search a preset association table for the font corresponding to the depth average; the association table stores the correspondence between depth averages and fonts. The correspondence is mainly embodied as follows: the larger the depth average, the more prominent (or more obvious) the corresponding font — that is, in practical application, the farther the target object is from the electronic device, the larger, thicker, and/or darker the font. To facilitate font adjustment, the association table can be set using the font level calculation formula shown in formula (4), which determines the correspondence between the depth average D̄ and the font:

L ∝ D̄ / (width0 × height0)    (4)

where L is the level of the font having the correspondence with the depth average D̄; the level includes one or more of the size level, thickness level, and color level of the text; and width0 and height0 are the width and height, respectively, of the face image. It can also be seen from formula (4) that, for a face image of width0 and height0, the depth average D̄ is directly proportional to the level L of the font to be adjusted.

In the correspondence determined by formula (4), each depth average D̄ corresponds to an adapted font level. To simplify the font adjustment process while meeting the user's visual comfort requirements, a number of font levels can be set based on ranges of the depth average: each range of depth averages is associated with one font level, and the associated levels and depth-average ranges are stored in the association table. Different size levels, different thickness levels, and different color shade levels of a font may be combined to determine the levels. For example, assume the size level of a font is set from size one to size eight, and the thickness level of the font is set to standard, regular, medium, bold, and extra bold. In one embodiment, the following font levels may be set according to different sizes and thicknesses of the font: level one corresponds to size six and standard thickness; level two corresponds to size small five and regular thickness; level three corresponds to size five and medium thickness; level four corresponds to size small four and bold thickness; and level five corresponds to size four and extra bold thickness.
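A range-based association table of the kind described above might look like the following; the depth ranges in millimetres are invented for illustration, while the five-level numbering follows the size/thickness example in the text:

```python
import math

# Hypothetical association table: [lower bound, upper bound) of the depth
# average (mm) -> font level (1 = size six / standard ... 5 = size four / extra bold).
ASSOCIATION_TABLE = [
    (0, 400, 1),
    (400, 700, 2),
    (700, 1000, 3),
    (1000, 1500, 4),
    (1500, math.inf, 5),
]

def lookup_font_level(depth_mean_mm):
    """Step B: find the font level whose depth-average range contains the value."""
    for lower, upper, level in ASSOCIATION_TABLE:
        if lower <= depth_mean_mm < upper:
            return level
    raise ValueError("depth average out of range")
```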
Step C: adjust the font of the specified text according to the found font. As described in step B, font levels are used to represent different fonts; what is found in this embodiment is therefore a font level, and one or more of the size, thickness, and color of the specified text are adjusted according to the found font level.
In addition, in order to ensure that the target object is at a good visual comfort level even when the font adjusting function is not started, an initial value of the font may be set, or the usage habit of the target object with respect to the font may be detected.
The initial value of the font can be understood as the font level most comfortable for the target object when the display interface of the electronic equipment is viewed at a fixed distance. For example, if a level-three font is most comfortable when a person's face is 45 cm from the display interface of a computer, the level-three font is used as the initial value for font display on that display interface.
Detecting the usage habit of the target object with respect to the font can be understood as: acquiring the font adjustment results of a specified number of adjustments (for example, 50), counting these results, and determining the most frequently used font adjustment result as the habitual font of the target object.
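The habit-detection step just described (count a specified number of recent adjustment results and keep the most frequent one) can be sketched as follows; the `habitual_font` name and the default window of 50 adjustments are assumptions for illustration:

```python
from collections import Counter

def habitual_font(adjustment_history, window=50):
    """Return the most frequent font-adjustment result among the last
    `window` results, i.e. the target object's habitual font."""
    recent = adjustment_history[-window:]
    if not recent:
        return None  # no adjustments recorded yet
    # most_common(1) returns [(result, count)] for the highest-frequency result.
    return Counter(recent).most_common(1)[0][0]
```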
In summary, the font adjusting method provided in the above embodiments can adjust the font (one or more of the size, thickness, and color) of the designated text based on the collected depth information of the face image. Because the depth information reflects the distance between the face of the target object (such as a user) and the electronic equipment executing the font adjusting method, adjusting the font according to the depth information better meets the user's requirements and effectively improves the user experience without requiring manual adjustment by the user.
Embodiment three:
For the font adjusting method provided in embodiment two, an embodiment of the present invention provides a font adjusting apparatus. Referring to the structural block diagram of the font adjusting apparatus shown in Fig. 6, the apparatus includes:
An image obtaining module 602, configured to obtain a face image of the target object.
A depth information collecting module 604, configured to collect depth information of the face image.
A font adjusting module 606, configured to adjust the font of the specified characters based on the depth information, where the font includes one or more of the size, thickness, and color of the characters.
The font adjusting apparatus provided by the embodiment of the invention first acquires the face image of the target object and collects the depth information of the face image, and then adjusts the font (one or more of the size, thickness and color) of the specified characters based on the depth information. Because the depth information reflects the distance between the face of the target object (such as a user) and the electronic equipment, adjusting the font according to the depth information better meets the user's requirements and effectively improves the user experience without requiring manual adjustment by the user.
In an embodiment, the depth information collecting module 604 is further configured to: acquire the depth value of each pixel point in the face image through a depth sensor; and determine the depth value of each pixel point in the face image as the depth information of the face image.
In an embodiment, the font adjusting module 606 is further configured to: calculate the depth average value of the face image based on the depth value of each pixel point in the face image; search for the font corresponding to the depth average value in a preset association table, wherein the association table stores the correspondence between depth average values and fonts; and adjust the font of the specified characters according to the found font.
In an embodiment, the font adjusting module 606 is further configured to: calculate the mean of the depth values of all pixel points in the face image, and determine the obtained mean as the depth average value of the face image.
In an embodiment, the font adjusting module 606 is further configured to: calculate the mean of the depth values of the pixel points located in the eye region of the face image, and determine the obtained mean as the depth average value of the face image.
In an embodiment, the font adjusting module 606 is further configured to: calculate the mean of the depth values of all pixel points in the face image to obtain a first average value; calculate the mean of the depth values of the pixel points located in the eye region of the face image to obtain a second average value; and generate the depth average value of the face image from the first average value and the second average value.
In an embodiment, the font adjusting module 606 is further configured to: calculate the mean of the depth values of all pixel points in the face image according to a first mean value calculation formula to obtain the first average value; wherein the first mean value calculation formula is:
D̄1 = ( Σ_{xi=x0}^{x0+width0−1} Σ_{yi=y0}^{y0+height0−1} D_{xi,yi} ) / (width0 × height0)
wherein D̄1 is the first average value; (x0, y0, width0, height0) are the position parameters of the face image (the coordinates of its top-left corner and its width and height); and D_{xi,yi} is the depth value of the pixel point (xi, yi) in the face image.
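Assuming the per-pixel depth values are held in a 2-D array indexed as `depth_map[y, x]`, the first average value over the face region can be computed as in this sketch; the `first_mean` helper and the indexing convention are assumptions, not part of the embodiment:

```python
import numpy as np

def first_mean(depth_map, x0, y0, width0, height0):
    """Mean depth over the face region (x0, y0, width0, height0):
    a width0-by-height0 rectangle whose top-left corner is (x0, y0)."""
    roi = depth_map[y0:y0 + height0, x0:x0 + width0]
    return float(roi.mean())
```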
In an embodiment, the font adjusting module 606 is further configured to: generate the depth average value of the face image according to the first average value, the second average value and a second mean value calculation formula; wherein the second mean value calculation formula is:
D̄ = k · D̄2 + (1 − k) · D̄1
wherein D̄ is the depth average value of the face image, D̄1 is the first average value, D̄2 is the second average value, k is the weight of the second average value, and 0.5 < k < 1.
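A minimal sketch of combining the two means under the constraint 0.5 < k < 1 stated above; the default k = 0.7 is an illustrative choice, not a value from the embodiment:

```python
def depth_average(first_mean, second_mean, k=0.7):
    """Weighted depth average: k * (eye-region mean) + (1 - k) * (whole-face mean).

    Because 0.5 < k < 1, the eye-region mean (second average) is weighted
    more heavily than the whole-face mean (first average).
    """
    if not 0.5 < k < 1:
        raise ValueError("k must satisfy 0.5 < k < 1")
    return k * second_mean + (1 - k) * first_mean
```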
In an embodiment, the font adjusting module 606 is further configured to: determine the correspondence between the depth average value and the font by using a font level calculation formula; wherein the font level calculation formula is:
wherein L is the level of the font, and the level comprises one or more of the size level, the thickness level and the color level of the characters; D̄ is the depth average value; and width0 and height0 are the width and the height, respectively, of the face image.
The device provided in this embodiment has the same implementation principle and technical effects as those of the foregoing embodiment, and for the sake of brief description, reference may be made to corresponding contents in the foregoing embodiment.
Embodiment four:
Based on the foregoing embodiments, this embodiment provides a font adjusting system, which includes: an image acquisition device, a processor, and a storage device. The image acquisition device is used for acquiring a face image of the target object; the storage device stores a computer program which, when executed by the processor, performs the method according to embodiment two.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the system described above may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
Further, this embodiment also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processing device, it performs the steps of any one of the methods provided in embodiment two.
The computer program product of the font adjusting method, apparatus and system provided in the embodiments of the present invention includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to execute the methods described in the foregoing method embodiments. For specific implementation, reference may be made to the method embodiments, which will not be described herein again.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present invention, which are used to illustrate the technical solutions of the present invention and not to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art can still modify the technical solutions described in the foregoing embodiments, or easily conceive of changes, or make equivalent substitutions for some of the technical features, within the technical scope disclosed by the present invention; such modifications, changes or substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present invention, and shall all be covered within the protection scope thereof. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (12)

1. A font adjustment method, comprising:
acquiring a face image of a target object;
collecting depth information of the face image;
and adjusting the font of the specified character based on the depth information, wherein the font comprises one or more of the size, the thickness and the color of the character.
2. The method of claim 1, wherein the step of acquiring depth information of the facial image comprises:
acquiring the depth value of each pixel point in the face image through a depth sensor;
and determining the depth value of each pixel point in the face image as the depth information of the face image.
3. The method of claim 2, wherein the step of adjusting the font of the specified text based on the depth information comprises:
calculating the average depth value of the face image based on the depth value of each pixel point in the face image;
searching a font corresponding to the depth average value from a preset association table; wherein, the association table stores the corresponding relation between the depth average value and the font;
and carrying out font adjustment on the specified characters according to the searched fonts.
4. The method of claim 3, wherein the step of calculating the average depth value of the face image based on the depth values of the pixels in the face image comprises:
and carrying out mean value calculation on the depth values of all pixel points in the face image, and determining the obtained mean value as the depth mean value of the face image.
5. The method of claim 3, wherein the step of calculating the average depth value of the face image based on the depth values of the pixels in the face image comprises:
and carrying out mean value calculation on the depth values of all pixel points positioned in the eye region in the face image, and determining the obtained mean value as the depth mean value of the face image.
6. The method of claim 3, wherein the step of calculating the average depth value of the face image based on the depth values of the pixels in the face image comprises:
carrying out mean value calculation on the depth values of all pixel points in the face image to obtain a first mean value;
carrying out mean value calculation on the depth values of all pixel points positioned in the eye region in the face image to obtain a second mean value;
and generating a depth average value of the face image according to the first average value and the second average value.
7. The method of claim 6, wherein the step of averaging the depth values of the pixels in the face image to obtain a first average value comprises:
carrying out mean value calculation on the depth values of all pixel points in the face image according to a first mean value calculation formula to obtain a first mean value; wherein the first mean value calculation formula is:
D̄1 = ( Σ_{xi=x0}^{x0+width0−1} Σ_{yi=y0}^{y0+height0−1} D_{xi,yi} ) / (width0 × height0)
wherein D̄1 is the first average value, (x0, y0, width0, height0) are the position parameters of the face image, and D_{xi,yi} is the depth value of the pixel point (xi, yi) in the face image.
8. The method of claim 6, wherein the step of generating the depth average of the face image from the first average and the second average comprises:
generating a depth average value of the face image according to the first average value, the second average value and a second average value calculation formula; wherein the second mean value calculation formula is:
D̄ = k · D̄2 + (1 − k) · D̄1
wherein D̄ is the depth average value of the face image, D̄1 is the first average value, D̄2 is the second average value, k is the weight of the second average value, and 0.5 < k < 1.
9. The method according to claim 3, wherein the setting method of the association table comprises:
determining the corresponding relation between the depth average value and the font by adopting a font level calculation formula; wherein, the font level calculation formula is as follows:
wherein L is the level of the font, and the level comprises one or more of the size level, the thickness level and the color level of the characters; D̄ is the depth average value; and width0 and height0 are the width and the height, respectively, of the face image.
10. A font adjustment apparatus, the apparatus comprising:
the image acquisition module is used for acquiring a face image of the target object;
the depth information acquisition module is used for acquiring the depth information of the face image;
and the font adjusting module is used for adjusting the font of the specified character based on the depth information, wherein the font comprises one or more of the size, thickness and color of the character.
11. A font adjustment system, the system comprising: the device comprises an image acquisition device, a processor and a storage device;
the image acquisition device is used for acquiring a face image of the target object;
the storage device has stored thereon a computer program which, when executed by the processor, performs the method of any of claims 1 to 9.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of the preceding claims 1 to 9.
CN201910135199.1A 2019-02-20 2019-02-20 Font adjusting method, apparatus and system Pending CN109710371A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910135199.1A CN109710371A (en) 2019-02-20 2019-02-20 Font adjusting method, apparatus and system


Publications (1)

Publication Number Publication Date
CN109710371A true CN109710371A (en) 2019-05-03

Family

ID=66264948

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910135199.1A Pending CN109710371A (en) 2019-02-20 2019-02-20 Font adjusting method, apparatus and system

Country Status (1)

Country Link
CN (1) CN109710371A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106331492A (en) * 2016-08-29 2017-01-11 广东欧珀移动通信有限公司 Image processing method and terminal
CN106326867A (en) * 2016-08-26 2017-01-11 维沃移动通信有限公司 Face recognition method and mobile terminal
CN107452034A (en) * 2017-07-31 2017-12-08 广东欧珀移动通信有限公司 Image processing method and its device
CN107515844A (en) * 2017-07-31 2017-12-26 广东欧珀移动通信有限公司 Font method to set up, device and mobile device
CN107977636A (en) * 2017-12-11 2018-05-01 北京小米移动软件有限公司 Method for detecting human face and device, terminal, storage medium
CN109327626A (en) * 2018-12-12 2019-02-12 Oppo广东移动通信有限公司 Image-pickup method, device, electronic equipment and computer readable storage medium


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110275749A (en) * 2019-06-19 2019-09-24 广东乐芯智能科技有限公司 A kind of method of surface amplification display
CN110275749B (en) * 2019-06-19 2022-03-11 深圳顺盈康医疗设备有限公司 Surface amplifying display method
CN110377385A (en) * 2019-07-05 2019-10-25 深圳壹账通智能科技有限公司 A kind of screen display method, device and terminal device
CN110377385B (en) * 2019-07-05 2022-06-21 深圳壹账通智能科技有限公司 Screen display method and device and terminal equipment
CN110413955A (en) * 2019-07-30 2019-11-05 北京小米移动软件有限公司 Word resets section method, apparatus, terminal and storage medium
CN110413955B (en) * 2019-07-30 2023-04-07 北京小米移动软件有限公司 Character readjusting method, device, terminal and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190503