CN110765847B - Font adjustment method, device, equipment and medium based on face recognition - Google Patents


Info

Publication number
CN110765847B
CN110765847B · Application CN201910841750.4A
Authority
CN
China
Prior art keywords
distance
face
font
value
distance value
Prior art date
Legal status
Active
Application number
CN201910841750.4A
Other languages
Chinese (zh)
Other versions
CN110765847A (en)
Inventor
温桂龙
Current Assignee
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201910841750.4A priority Critical patent/CN110765847B/en
Priority to PCT/CN2019/116945 priority patent/WO2021042518A1/en
Publication of CN110765847A publication Critical patent/CN110765847A/en
Application granted granted Critical
Publication of CN110765847B publication Critical patent/CN110765847B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Collating Specific Patterns (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a font adjustment method, device, equipment, and medium based on face recognition. The method comprises: collecting, with a camera on an intelligent terminal, a first face image at the current system time, the first face image corresponding to a user identifier; performing face matching on the first face image based on the historical face images in the face database corresponding to the user identifier, to obtain a face matching result; if the face matching result is a failure, continuously collecting, with the camera, a second face image corresponding to each moment; performing action analysis on the second face images corresponding to two adjacent moments to obtain an action analysis result; and dynamically adjusting the current font displayed on the screen of the intelligent terminal based on the action analysis result. The method requires no manual adjustment by the user, reduces the complexity of font adjustment, and improves font adjustment efficiency.

Description

Font adjustment method, device, equipment and medium based on face recognition
Technical Field
The present invention relates to the field of face recognition technologies, and in particular, to a font adjustment method, device, equipment, and medium based on face recognition.
Background
With the popularization of smartphones, the range of applications in use has expanded. Because the mobile phone system and its applications are built for general use, mobile phone system manufacturers usually add a font size adjustment function to the system to suit the needs of different groups of users, and some APP developers likewise provide an in-APP font size adjustment function together with a default font size. However, for users with impaired vision (e.g., elderly people with presbyopia, or far-sighted/near-sighted users), the default font may not be legible without glasses, so frequent manual font adjustment is required, which makes font adjustment inconvenient.
Disclosure of Invention
The embodiments of the invention provide a font adjustment method and device, computer equipment, and a storage medium based on face recognition, to solve the problem that font adjustment is inconvenient because it can conventionally only be performed manually by the user.
A font adjustment method based on face recognition comprises the following steps:
collecting, with a camera on the intelligent terminal, a first face image at the current system time, the first face image corresponding to a user identifier;
performing face matching on the first face image based on the historical face images in the face database corresponding to the user identifier, to obtain a face matching result;
if the face matching result is a failure, continuously collecting, with the camera, a second face image corresponding to each moment;
performing action analysis on the second face images corresponding to two adjacent moments to obtain an action analysis result;
and dynamically adjusting the current font displayed on the screen of the intelligent terminal based on the action analysis result.
A font adjustment device based on face recognition, comprising:
a first face image acquisition module, configured to collect, with a camera on an intelligent terminal, a first face image at the current system time, the first face image corresponding to a user identifier;
a face matching result acquisition module, configured to perform face matching on the first face image based on the historical face images in the face database corresponding to the user identifier, to obtain a face matching result;
a second face image acquisition module, configured to continuously collect, with the camera, a second face image corresponding to each moment if the face matching result is a failure;
an action analysis result acquisition module, configured to perform action analysis on the second face images corresponding to two adjacent moments to obtain an action analysis result;
and a font dynamic adjustment module, configured to dynamically adjust the current font displayed on the screen of the intelligent terminal based on the action analysis result.
A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the above-described face recognition based font adjustment method when the computer program is executed.
A computer storage medium storing a computer program which, when executed by a processor, implements the steps of the font adjustment method based on face recognition described above.
In the above font adjustment method and device, computer equipment, and storage medium based on face recognition, a camera on the intelligent terminal collects a first face image at the current system time, so that face matching is performed on the first face image based on the historical face images in the face database corresponding to the user identifier, and a face matching result is obtained. If the face matching fails, the camera continuously collects second face images corresponding to each moment, so that action analysis can be performed on the second face images corresponding to two adjacent moments to determine the action characteristics, and hence the action analysis result, of the current user. Because the action analysis compares second face images from two adjacent moments, the user's current action change is analyzed from the image itself, which yields higher accuracy. Finally, the current font displayed on the screen of the intelligent terminal is dynamically adjusted based on the action analysis result without manual adjustment, achieving dynamic adjustment of the current font.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of an application environment of a font adjustment method based on face recognition according to an embodiment of the present invention;
FIG. 2 is a flow chart of a font adjustment method based on face recognition in an embodiment of the present invention;
FIG. 3 is a flowchart showing step S40 in FIG. 2;
FIG. 4 is a flowchart showing step S45 in FIG. 3;
FIG. 5 is a flowchart showing step S50 in FIG. 2;
FIG. 6 is a flow chart of a font adjustment method based on face recognition in an embodiment of the present invention;
FIG. 7 is a flowchart showing step S50 in FIG. 2;
FIG. 8 is a flow chart of a font adjustment method based on face recognition in an embodiment of the present invention;
fig. 9 is a schematic diagram of a font adjusting device based on face recognition according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a computer device in accordance with an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The font adjustment method based on face recognition provided by the embodiments of the invention can be applied to computer equipment on which an application is installed, to adjust fonts dynamically according to the characteristics of different user groups. The method can be applied in an application environment as shown in fig. 1, in which a computer device communicates with a server through a network. The computer device may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer, or portable wearable device. In one embodiment, as shown in fig. 2, a font adjustment method based on face recognition is provided; taking its application to the server in fig. 1 as an illustration, the method includes the following steps:
S10: collecting, with a camera on the intelligent terminal, a first face image at the current system time, the first face image corresponding to a user identifier.
The font adjustment method based on face recognition can be applied to different APPs (applications) as a plug-in tool; applying it as a plug-in shortens the development period and enhances the operability of the application. Specifically, if the automatic font adjustment function is enabled, the server invokes a camera on the intelligent terminal to collect the first face image corresponding to the user identifier at the current system time, so as to match the first face image against the historical face images in the face database corresponding to the user identifier.
It can be understood that the user can enable the automatic font adjustment function as required, and the server can also automatically decide whether to enable the automatic font adjustment function according to the current battery level of the intelligent terminal. Further, the server automatically judging whether to enable the automatic font adjustment function according to the current battery level of the intelligent terminal comprises the following steps:
S111: executing a command line in a preset script to acquire battery level information from the intelligent terminal, the information including the current battery level.
S112: if the current battery level is greater than the battery level threshold, executing step S10 to enable the automatic font adjustment function;
S113: if the current battery level is less than or equal to the battery level threshold, disabling the automatic font adjustment function.
The preset script is a script edited in advance by a developer for acquiring the battery level information from the terminal device. Because the script is preset, it can be executed multiple times without manual intervention, reducing labor cost. For example, the server may acquire the battery level information from the terminal device using a command line in the preset script, such as the command statement adb shell dumpsys battery.
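The battery check in steps S111-S113 can be sketched as follows. The threshold value, function names, and the sample dumpsys output are assumptions for illustration only, not part of the disclosure:

```python
import re

# Hypothetical battery-level threshold (percent) below which the
# automatic font adjustment function is disabled to save power.
POWER_THRESHOLD = 20

def parse_battery_level(dumpsys_output: str) -> int:
    """Extract the 'level' field from 'adb shell dumpsys battery' output."""
    match = re.search(r"level:\s*(\d+)", dumpsys_output)
    if match is None:
        raise ValueError("no battery level found in dumpsys output")
    return int(match.group(1))

def auto_font_adjust_enabled(dumpsys_output: str,
                             threshold: int = POWER_THRESHOLD) -> bool:
    """Steps S112/S113: enable only when the level exceeds the threshold."""
    return parse_battery_level(dumpsys_output) > threshold

# Sample text in the usual dumpsys format (assumed for illustration).
SAMPLE = "Current Battery Service state:\n  AC powered: false\n  level: 85\n  scale: 100"
```

A real implementation would query the battery state through the platform APIs or by running the command via adb; parsing its text output is shown here only to make the threshold logic concrete.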
S20: performing face matching on the first face image based on the historical face images in the face database corresponding to the user identifier, to obtain a face matching result.
The face database stores the historical face images and historical font adjustment strategies corresponding to each user identifier. It can be understood that if the user is using the automatic font adjustment function for the first time, no historical face image corresponding to the user identifier exists in the face database, and a result of face matching failure is obtained. If the user is not using the automatic font adjustment function for the first time, a historical face image corresponding to the user identifier exists in the face database, and a result of successful face matching is obtained.
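The first-time-user logic described above can be sketched minimally; the in-memory dictionary and the function name are hypothetical placeholders for the face database and the matching service:

```python
# Hypothetical in-memory stand-in for the face database:
# user identifier -> list of historical face images.
face_database = {}

def match_face(user_id, first_face_image):
    """Return 'success' when historical face images exist for the user.

    Per the logic above, a first-time user has no historical face image,
    so matching fails; a production system would additionally compare the
    captured image against the stored ones (e.g. embedding similarity).
    """
    if face_database.get(user_id):
        return "success"
    return "failure"
```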
S30: if the face matching result is a failure, continuously collecting, with the camera, a second face image corresponding to each moment.
Specifically, if the face matching fails, the first face image cannot be matched with any historical face image in the face database, which shows that the face database has no user data corresponding to the user identifier; the camera then needs to be invoked to continuously collect a second face image corresponding to each moment, so as to determine the font adjustment strategy for this user. A font adjustment strategy is an adjustment strategy determined per user and suited to that user, so that automatic font adjustment is customized. It will be appreciated that the video stream collected in real time corresponds to a time axis, and the points on the time axis are the moments, e.g., 1s, 2s, etc.
S40: performing action analysis on the second face images corresponding to two adjacent moments to obtain an action analysis result.
Specifically, second face images corresponding to two adjacent moments are selected, and the action characteristics of the current user are determined from the change between the two images, so that the action analysis result of the current user is further determined. In this embodiment, the action analysis results include, but are not limited to, maintaining balance, approaching the screen, and moving away from the screen.
S50: dynamically adjusting the current font displayed on the screen of the intelligent terminal based on the action analysis result.
Specifically, the server continuously collects the second face image corresponding to the user identifier at each moment to judge the user's action change, and then dynamically adjusts the current font displayed on the screen of the intelligent terminal according to that action change, i.e., the action analysis result. Further, in this embodiment, a font adjustment range may be set, and the UI may adopt a constraint layout to constrain the widths of controls, reducing the impact of font adjustment on the layout of the whole UI.
In this embodiment, a camera on the intelligent terminal collects a first face image at the current system time, so that face matching is performed on the first face image based on the historical face images in the face database corresponding to the user identifier, and a face matching result is obtained. If the face matching fails, the camera continuously collects second face images corresponding to each moment, so that action analysis can be performed on the second face images corresponding to two adjacent moments to determine the action characteristics, and hence the action analysis result, of the current user. Because the action analysis compares second face images from two adjacent moments, the user's current action change is analyzed from the image itself, which yields higher accuracy. Finally, the current font displayed on the screen of the intelligent terminal is dynamically adjusted based on the action analysis result without manual adjustment, achieving dynamic adjustment of the current font.
In an embodiment, as shown in fig. 3, in step S40, motion analysis is performed on second face images corresponding to two adjacent moments to obtain a motion analysis result, which specifically includes the following steps:
s41: and respectively extracting feature points of the second face images corresponding to the two adjacent moments to obtain a first feature set and a second feature set, wherein the first feature points in the first feature set correspond to the second feature points in the second feature set.
Wherein the first feature set comprises at least one first feature point and the second feature set comprises at least one second feature point. Specifically, since the same camera collects both second face images, the image sizes are the same, and since the same algorithm extracts the feature points from the second face images of the two adjacent moments, the extracted feature points correspond between the images, i.e., the first feature points in the first feature set correspond to the second feature points in the second feature set; this ensures the accuracy of the subsequent action analysis and excludes interference from other factors. Note that the feature points extracted from the second face image by the image feature extraction algorithm are understood as pixel points and are not limited to facial landmark points.
Specifically, second face images corresponding to two adjacent moments are selected, and the action characteristics of the current user are determined from the change between them. The feature points may be extracted using an image feature extraction algorithm including, but not limited to, the HOG, LBP, or Haar feature extraction algorithms.
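As a concrete illustration of one of the algorithms named above, here is a minimal pure-Python LBP (Local Binary Pattern) code for a single pixel over its 3x3 neighborhood. The clockwise bit ordering is one common convention and an assumption here, not something the disclosure specifies:

```python
def lbp_code(img, y, x):
    """LBP code of pixel (y, x) for a grayscale image given as a list
    of rows: each of the 8 neighbors contributes one bit, set when the
    neighbor's intensity is >= the center pixel's intensity."""
    center = img[y][x]
    # 8 neighbors, clockwise starting at the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= center:
            code |= 1 << bit
    return code
```

A full extractor would compute this code for every interior pixel and histogram the results; the per-pixel codes serve here as the "feature points understood as pixel points" mentioned above.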
S42: any two first feature points in the first feature set are selected as a first target feature set, and two second feature points in the second feature set corresponding to the first target feature set are selected as a second target feature set.
S43: performing distance calculation on two first feature points in at least one first target feature group to obtain at least one first feature point distance corresponding to a first feature set; and performing distance calculation on two second feature points in the at least one second target feature group to obtain at least one second feature point distance corresponding to the second feature set.
It should be noted that, because the first feature set may contain a large number of first feature points, in this embodiment a portion of the first feature points in the first feature set may be selected as a target feature set; any two first feature points are then selected from the target feature set to form each first target feature group, and the distance between the two first feature points in each of the at least one first target feature group is calculated to obtain at least one first feature point distance corresponding to the first feature set, thereby improving processing efficiency. Correspondingly, the second feature points corresponding to the first feature points in each first target feature group are selected as a second target feature group, and the distance between the two second feature points in each of the at least one second target feature group is calculated to obtain at least one second feature point distance corresponding to the second feature set. For example, if a first target feature group is [T1, T2], the corresponding second target feature group is [t1, t2].
S44: and respectively counting the first characteristic point distance and the second characteristic point distance to obtain a first distance value and a second distance value.
Specifically, the first feature point distances and the second feature point distances are aggregated to obtain a first distance value and a second distance value: the at least one first feature point distance is averaged to obtain the first distance value, and the at least one second feature point distance is averaged to obtain the second distance value.
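Steps S43-S44 amount to averaging pairwise distances. A sketch, under the assumption that feature points are (x, y) pixel coordinates:

```python
from math import hypot

def mean_pair_distance(points, pairs):
    """Average Euclidean distance over the selected feature-point pairs.

    points: list of (x, y) feature-point coordinates;
    pairs:  list of index pairs, one per target feature group;
    the returned mean corresponds to a distance value (DP1 or DP2).
    """
    distances = [hypot(points[i][0] - points[j][0],
                       points[i][1] - points[j][1])
                 for i, j in pairs]
    return sum(distances) / len(distances)
```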
Further, for ease of understanding, steps S41-S44 are illustrated herein:
For example, assume that the camera acquires second face images P1 and P2 at two consecutive moments.
1. Extract feature points from P1 to obtain a first feature set S1 = {T1, T2, …, Tn}, and from P2 to obtain a second feature set S2 = {t1, t2, …, tn}; the first feature points in S1 correspond to the second feature points in S2, i.e., {T1, t1}, {T2, t2}, and so on.
2. Take several feature points in S1 as the target feature set, then arbitrarily select two first feature points at a time from the target feature set to pair them, obtaining the first target feature groups, for example {[T1 T2], [T3 T4], …, [Tx Ty]}. Calculate the distance of each first target feature group in P1, i.e., {D12 (a first feature point distance), D34, …, Dxy}, and average all the obtained first feature point distances to obtain DP1 (the first distance value). Then select the second feature points corresponding to the first target feature groups from S2, i.e., {[t1 t2], [t3 t4], …, [tx ty]}, and calculate in the same manner to obtain the second feature point distances {d12, d34, …, dxy} and, by averaging, the second distance value DP2.
It should be noted that, to further improve the accuracy of the action analysis, when two first feature points are arbitrarily selected from the target feature set to form the first target feature groups, no feature point may appear in two groups: if [T1 T2] is a group, [T2 T3] cannot exist at the same time, because if both existed the contributions of feature point T2 would cancel out in the action analysis, so T2 would effectively not be analyzed, affecting the accuracy of the action analysis.
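The no-shared-point constraint described above is equivalent to choosing disjoint pairs; one simple way to do so is to pair consecutive indices. The pairing scheme is an illustrative choice, not mandated by the disclosure:

```python
def disjoint_pairs(indices):
    """Pair up feature-point indices so that no index appears in two
    pairs, avoiding the cancellation described above (e.g. [T1 T2]
    together with [T2 T3]); a leftover odd index is simply dropped."""
    it = iter(indices)
    return list(zip(it, it))
```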
S45: and comparing and analyzing with a preset distance threshold based on the first distance value and the second distance value to obtain an action analysis result.
Wherein the preset distance threshold is greater than zero. When computing the distance difference, the smaller distance value is used as the subtrahend: if the first distance value is larger than the second, the second is subtracted from the first; if it is smaller, the first is subtracted from the second.
Specifically, the distance difference between the first distance value and the second distance value is compared with the preset distance threshold. If the difference DP1-DP2 is larger than the preset distance threshold, the user is considered to be moving away from the screen, and an action analysis result of moving away from the screen is obtained; if the difference DP2-DP1 is larger than the preset distance threshold, the user is considered to be approaching the screen, and an action analysis result of approaching the screen is obtained.
In this embodiment, feature points extracted from the second face images corresponding to two adjacent moments are compared to perform action analysis, so that the current user's action change is analyzed from the change in the face images collected in real time, i.e., from the image angle, which yields higher accuracy.
Further, since a user (e.g., an elderly user) may move accidentally, in this embodiment, to improve fault tolerance, a first distance threshold is set as the threshold for determining whether the user's distance from the screen has actually changed.
In an embodiment, as shown in fig. 4, in step S45, a comparison analysis is performed with a preset distance threshold based on the first distance value and the second distance value, to obtain an action analysis result, which specifically includes the following steps:
S451: if the absolute value of the distance difference between the first distance value and the second distance value is smaller than the first distance threshold, obtaining an action analysis result of maintaining balance.
The preset distance threshold comprises a first distance threshold and a second distance threshold. The first distance threshold is the threshold for determining whether the user needs dynamic font adjustment; the second distance threshold is the threshold for determining the current user action. The first distance threshold is smaller than the second distance threshold. The distance difference is the difference between the first distance value and the second distance value, computed with the smaller value as the subtrahend.
Specifically, if the absolute value of the distance difference between the first distance value and the second distance value is smaller than the first distance threshold, the distance to the screen is considered unchanged, or changed so little as to be negligible, and an action analysis result of maintaining balance is obtained. Setting the first distance threshold avoids font adjustment being triggered by an accidental small movement of the user, which improves fault tolerance.
S452: if the absolute value of the distance difference between the first distance value and the second distance value is larger than the first distance threshold value, judging whether the first distance value is smaller than the second distance value.
S453: if the first distance value is smaller than the second distance value, and the difference between the second distance value and the first distance value is larger than the second distance threshold, obtaining an action analysis result of approaching the screen.
Specifically, if the absolute value of the distance difference between the first distance value and the second distance value is larger than the first distance threshold, the distance to the screen is considered to have changed, and it is further judged whether the first distance value is smaller than the second distance value. If the first distance value is smaller than the second distance value, and the difference between the second distance value and the first distance value is larger than the second distance threshold, which corresponds to the image pixel proportion becoming larger, an action analysis result of approaching the screen is obtained.
S454: if the first distance value is larger than the second distance value, and the difference between the first distance value and the second distance value is larger than the second distance threshold, obtaining an action analysis result of moving away from the screen.
Specifically, if the first distance value is larger than the second distance value, and the difference between the first distance value and the second distance value is larger than the second distance threshold, which corresponds to the image pixel proportion becoming smaller, the user is considered to be moving away from the screen, and an action analysis result of moving away from the screen is obtained.
In this embodiment, the absolute value of the distance difference between the first distance value and the second distance value is first compared with the first distance threshold to determine whether the font needs to be dynamically adjusted for the current user, avoiding font adjustment triggered by screen changes caused by user misoperation and improving fault tolerance. If the absolute value of the distance difference is greater than the first distance threshold, the action analysis result is obtained by comparing the first distance value with the second distance value, ensuring the accuracy of the action analysis result.
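The threshold comparison of steps S451-S454 can be sketched as follows; the function and result names are illustrative, not taken from the patent text:

```python
def analyze_action(first_distance, second_distance,
                   first_threshold, second_threshold):
    """Classify user motion from two averaged feature-point distances.

    first_distance comes from the earlier second face image,
    second_distance from the later one.
    """
    diff = second_distance - first_distance
    # S451: a change below the first threshold is treated as noise
    # (misoperation), so the result is "keep balance".
    if abs(diff) < first_threshold:
        return "keep_balance"
    # S453: feature-point distances grew -> face appears larger in the
    # image -> the user is approaching the screen.
    if first_distance < second_distance and diff > second_threshold:
        return "close_to_screen"
    # S454: feature-point distances shrank -> face appears smaller ->
    # the user is moving away from the screen.
    if first_distance > second_distance and -diff > second_threshold:
        return "far_from_screen"
    return "keep_balance"
```

A change that exceeds the first threshold but not the second falls back to "keep balance" in this sketch; the patent does not specify that borderline case.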
In one embodiment, as shown in fig. 5, in step S50, the current font displayed on the screen of the intelligent terminal is dynamically adjusted based on the action analysis result, which specifically includes the following steps:
S511: if the action analysis result is far from the screen, the current font displayed on the screen of the intelligent terminal is reduced according to the preset proportion.
The preset proportion may be a user-defined percentage, for example, 10%. Specifically, if the action analysis result is being far from the screen, it indicates that the current font displayed on the screen of the intelligent terminal is too large and unsuitable for the user to read or view, so the current font displayed on the screen of the intelligent terminal is reduced according to the preset proportion, dynamically adjusting the font size with the user's movement.
S512: and if the action analysis result is that the screen is close to the action analysis result, amplifying the current font displayed on the screen of the intelligent terminal according to a preset proportion.
Specifically, if the action analysis result is being close to the screen, it indicates that the current font displayed on the screen of the intelligent terminal is too small and unsuitable for the user to read or view, so the current font displayed on the screen of the intelligent terminal is enlarged according to the preset proportion, dynamically adjusting the font size with the user's movement.
In this embodiment, the user's adjustment intention is analyzed from the action analysis result, so that a font size suitable for the user is dynamically adjusted according to the user's movement, achieving customized dynamic adjustment: different users correspond to different font adjustment strategies, satisfying different font adjustment requirements.
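Steps S511-S512 amount to scaling the current font by the preset proportion in the direction indicated by the action analysis result. A minimal sketch, using the 10% example value from the text (the function name and result strings are illustrative):

```python
def adjust_font(current_size, action, preset_ratio=0.10):
    """Scale the current font size by the preset proportion.

    S511: shrink when the user moves away from the screen.
    S512: enlarge when the user approaches the screen.
    """
    if action == "far_from_screen":
        return current_size * (1 - preset_ratio)
    if action == "close_to_screen":
        return current_size * (1 + preset_ratio)
    # "keep balance": no change.
    return current_size
```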
Further, the user can also adjust the sensitivity of font adjustment, avoiding the annoyance the font adjustment function may cause for users prone to frequent misoperation.
Specifically, when acquiring the second face image, a preset time interval may be set, so that the server acquires the second face image at that interval. Understandably, when the user decreases the font sensitivity, the preset time interval is increased, that is, the acquisition rate of the second face image is reduced, and subsequent second face images are acquired at the adjusted interval, achieving dynamic control of the font adjustment speed through the font sensitivity.
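One way to map the user-set sensitivity to the capture interval is a simple linear relation; the patent only states that lower sensitivity lengthens the interval, so the mapping and constants below are illustrative assumptions:

```python
def capture_interval(base_interval_s, sensitivity):
    """Map a user-set sensitivity (0.0 .. 1.0) to a capture interval.

    Lower sensitivity -> longer interval between second-face-image
    captures, i.e. slower font adjustment. At sensitivity 1.0 the base
    interval is used; at 0.0 the interval is stretched to 4x the base
    (an assumed factor, not specified in the patent).
    """
    sensitivity = min(max(sensitivity, 0.0), 1.0)
    return base_interval_s * (1 + 3 * (1 - sensitivity))
```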
In an embodiment, as shown in fig. 6, after step S40, the font adjustment method based on face recognition further includes the following steps:
S41: if the action analysis result is that balance is maintained within the preset duration, the current screen distance and the current font size are obtained.
S42: and taking the current screen distance and the current font size as target font adjustment strategies, and storing the target font adjustment strategies in association with the user identification.
The target font adjustment strategy comprises a target screen distance and a target font size, so that the next adjustment can be performed directly according to the target screen distance and the target font size in the target font adjustment strategy, improving adjustment efficiency and reducing resource occupation. It can be appreciated that if the action analysis result is that balance is maintained within the preset duration, the target font adjustment strategy suitable for the current user is considered to have been acquired, and acquisition of the second face image may be stopped.
Specifically, if the action analysis result is keeping balance within the preset duration, the current screen distance (which may be obtained through binocular ranging or a distance sensor) and the current font size at that moment are considered suitable for the user; the current screen distance and the current font size are taken as the target font adjustment strategy, and the target font adjustment strategy is stored in association with the user identifier.
In this embodiment, by analyzing the action analysis result within the preset duration, it is determined whether the current screen distance (which may be obtained through a binocular ranging or distance sensor) and the current font size are suitable for the user, so that the current screen distance and the current font size are stored in association with the user identifier as the target font adjustment policy, so that the adjustment is performed directly according to the target screen distance and the target font size in the target font adjustment policy, the adjustment efficiency is improved, and the occupation of resources is reduced.
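Steps S41-S42 reduce to persisting a (screen distance, font size) pair keyed by the user identifier. A minimal sketch, where the in-memory dictionary stands in for whatever database the server actually uses and all names are illustrative:

```python
# Illustrative store; a real server would use a persistent database.
strategy_store = {}

def save_target_strategy(user_id, screen_distance_cm, font_size_pt):
    """S42: store the target font adjustment strategy keyed by user id."""
    strategy_store[user_id] = {
        "target_screen_distance": screen_distance_cm,
        "target_font_size": font_size_pt,
    }

def load_target_strategy(user_id):
    """Returns None when no strategy has been learned for this user yet."""
    return strategy_store.get(user_id)
```

On a later session the stored strategy can be applied directly, skipping the incremental adjustment loop.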
In one embodiment, as shown in fig. 7, in step S50, the current font displayed on the screen of the intelligent terminal is dynamically adjusted based on the action analysis result, which specifically includes the following steps:
S521: and acquiring the moving speed of the intelligent terminal in real time.
S522: and determining the font adjusting speed according to the moving speed of the intelligent terminal.
S523: and dynamically adjusting the current font according to the font adjusting speed based on the action analysis result.
Specifically, when the dynamic adjustment is performed, the moving speed of the intelligent terminal is also acquired in real time, and the font adjustment speed is determined according to the moving speed of the intelligent terminal, so that the server dynamically adjusts the current font according to the font adjustment speed based on the action analysis result.
In this embodiment, the font adjustment speed is determined according to the movement speed of the intelligent terminal, so that the current font is dynamically adjusted at that speed based on the action analysis result, making the font adjustment method more broadly applicable.
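Steps S521-S523 only say the adjustment speed is determined from the terminal's movement speed; a proportional relation is one plausible reading. The function name, units, and constants below are illustrative assumptions:

```python
def font_adjust_speed(move_speed_mps, base_speed_pt_per_s=2.0, gain=4.0):
    """Derive a font adjustment speed from the terminal's movement speed.

    A faster-moving terminal gets a faster font transition so the text
    keeps pace with the user's motion; the linear mapping is an assumed
    relation, not specified in the patent.
    """
    return base_speed_pt_per_s + gain * move_speed_mps
```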
In an embodiment, as shown in fig. 8, after step S20, the font adjustment method based on face recognition further includes the following steps:
S221: if the face matching result is that the matching is successful, the first face image is input into a feature detection model to perform feature detection, and the face features are obtained.
The feature detection model is a model for detecting whether the current user wears glasses; it can be obtained by training on pictures of faces wearing glasses and pictures of faces not wearing glasses. The face feature is either wearing glasses or not wearing glasses.
S222: and inquiring a user image library corresponding to the user identifier based on the face characteristics, and acquiring a target user image corresponding to the face characteristics.
S223: and dynamically adjusting according to the historical font adjusting strategy corresponding to the target user image.
Two types of face images can be stored in the user image library: user images wearing glasses and user images not wearing glasses. Specifically, if the face feature is wearing glasses, font adjustment is performed according to the historical font adjustment strategy corresponding to the first type of face image (wearing glasses) in the user image library corresponding to the user identifier; if the face feature is not wearing glasses, dynamic adjustment is performed according to the historical font adjustment strategy corresponding to the second type of face image (not wearing glasses). The target user image corresponding to the face feature is determined based on the face feature (namely, whether glasses are worn), so that the cases of the same user wearing and not wearing glasses are distinguished, the interference of glasses-corrected vision on font adjustment is avoided, and the accuracy of font adjustment is improved.
It can be understood that if there are multiple historical adjustment strategies corresponding to the target user image, matching can be performed according to the current screen distance (which may be obtained through binocular ranging or a distance sensor), and dynamic adjustment can be performed according to the historical adjustment strategy corresponding to that screen distance; the user does not need to manually adjust to a suitable font, the strategy can be reused directly, and font adjustment efficiency is improved.
In this embodiment, the face features are obtained by inputting the successfully matched first face image into the feature detection model for feature detection, so that the target user image corresponding to the face features is determined based on the face features (namely, whether glasses are worn), distinguishing the cases of the same user wearing and not wearing glasses, avoiding the interference of glasses-corrected vision on font adjustment, and improving the accuracy of font adjustment.
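Steps S221-S223 select the stored user image, and hence the historical font strategy, matching the detected glasses/no-glasses feature. A minimal sketch, where the library layout and all names are illustrative assumptions:

```python
# Illustrative user image library: two images per user identifier,
# one with glasses and one without (the two types described above).
user_image_library = {
    "user42": {"glasses": "img_glasses.png", "no_glasses": "img_plain.png"},
}

def target_user_image(user_id, wears_glasses):
    """S222: pick the stored image matching the detected face feature."""
    key = "glasses" if wears_glasses else "no_glasses"
    return user_image_library[user_id][key]
```

The historical font adjustment strategy associated with the returned image would then be applied directly (S223).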
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not limit the implementation of the embodiments of the present invention.
In an embodiment, a font adjusting device based on face recognition is provided, and the font adjusting device based on face recognition corresponds to the font adjusting method based on face recognition in the embodiment one by one. As shown in fig. 9, the font adjustment device based on face recognition includes a first face image acquisition module 10, a face matching result acquisition module 20, a second face image acquisition module 30, an action analysis result acquisition module 40, and a font dynamic adjustment module 50. The functional modules are described in detail as follows:
The first face image acquisition module 10 is configured to acquire a first face image of a current time of the system by using a camera on the intelligent terminal, where the first face image corresponds to a user identifier.
The face matching result obtaining module 20 is configured to perform face matching on the first face image based on the historical face image in the face database corresponding to the user identifier, and obtain a face matching result.
The second face image acquisition module 30 is configured to continuously acquire a second face image corresponding to each moment by using the camera if the face matching result is that the matching fails.
And the action analysis result obtaining module 40 is configured to perform action analysis on the second face images corresponding to the two adjacent moments, and obtain an action analysis result.
And the font dynamic adjustment module 50 is used for dynamically adjusting the current font displayed on the screen of the intelligent terminal based on the action analysis result.
Specifically, the action analysis result acquisition module includes a feature point extraction unit, a target feature group acquisition unit, a feature point distance calculation unit, a feature point distance statistics unit, and an action analysis result acquisition unit.
And the feature point extraction unit is used for extracting feature points of the second face images corresponding to the two adjacent moments respectively to obtain a first feature set and a second feature set, wherein the first feature points in the first feature set correspond to the second feature points in the second feature set.
The target feature group acquisition unit is used for selecting any two first feature points in the first feature set as a first target feature group, and selecting two second feature points corresponding to the first target feature group in the second feature set as a second target feature group.
And the feature point distance calculation unit is used for calculating the distance of two first feature points in the at least one first target feature group to obtain at least one first feature point distance corresponding to the first feature set. And performing distance calculation on two second feature points in the at least one second target feature group to obtain at least one second feature point distance corresponding to the second feature set.
And the characteristic point distance statistics unit is used for respectively carrying out statistics on the first characteristic point distance and the second characteristic point distance to obtain a first distance value and a second distance value.
The action analysis result acquisition unit is used for carrying out comparison analysis with a preset distance threshold value based on the first distance value and the second distance value to acquire an action analysis result.
Specifically, the preset distance threshold includes a first distance threshold and a second distance threshold, and the action analysis result acquisition unit includes a first analysis subunit, a second analysis subunit, a third analysis subunit, and a fourth analysis subunit.
The first analysis subunit is configured to obtain an action analysis result that maintains balance if an absolute value of a distance difference between the first distance value and the second distance value is smaller than a first distance threshold.
And the second analysis subunit is used for judging whether the first distance value is smaller than the second distance value if the absolute value of the distance difference value between the first distance value and the second distance value is larger than the first distance threshold value.
And the third analysis subunit is used for acquiring an action analysis result close to the screen if the first distance value is smaller than the second distance value and the distance difference value between the second distance value and the first distance value is larger than the second distance threshold value.
And the fourth analysis subunit is used for acquiring an action analysis result far from the screen if the first distance value is larger than the second distance value and the distance difference value between the first distance value and the second distance value is larger than the second distance threshold value.
Specifically, the font adjusting device based on face recognition further comprises a current attribute obtaining module and a target font adjusting strategy obtaining module.
The current attribute acquisition module is used for acquiring the current screen distance and the current font size if the action analysis result is that balance is kept within the preset time length.
And the target font adjustment strategy acquisition module is used for taking the current screen distance and the current font size as target font adjustment strategies and storing the target font adjustment strategies and the user identification in an associated mode.
Specifically, the font dynamic adjustment module includes a first adjustment unit and a second adjustment unit.
And the first adjusting unit is used for reducing the current font displayed on the screen of the intelligent terminal according to a preset proportion if the action analysis result is far away from the screen.
And the second adjusting unit is used for amplifying the current fonts displayed on the screen of the intelligent terminal according to the preset proportion if the action analysis result is close to the screen.
Specifically, the font dynamic adjustment module comprises a moving speed acquisition unit, a font adjustment speed determination unit and a font dynamic adjustment unit.
And the moving speed acquisition unit is used for acquiring the moving speed of the intelligent terminal in real time.
And the font adjustment speed determining unit is used for determining the font adjustment speed according to the moving speed of the intelligent terminal.
And the font dynamic adjustment unit is used for dynamically adjusting the current font according to the font adjustment speed based on the action analysis result.
Specifically, the font adjusting device based on face recognition further comprises a face feature detection unit, a target user image acquisition unit and a dynamic adjusting unit.
And the face feature detection unit is used for inputting the first face image into the feature detection model to perform feature detection if the face matching result is that the matching is successful, so as to obtain the face features.
The target user image acquisition unit is used for inquiring a user image library corresponding to the user identifier based on the face characteristics and acquiring target user images corresponding to the face characteristics.
And the dynamic adjustment unit is used for dynamically adjusting according to the historical font adjustment strategy corresponding to the target user image.
For specific limitations of the font adjustment device based on face recognition, reference may be made to the limitations of the font adjustment method based on face recognition above, and the description thereof will not be repeated here. The above font adjustment device based on face recognition may be implemented in whole or in part by software, hardware, or a combination thereof. Each of the above modules may be embedded in hardware in, or independent of, a processor in the computer device, or may be stored in software in a memory of the computer device, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 10. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a computer storage medium, an internal memory. The computer storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the computer storage media. The database of the computer device is used for storing data, such as images to be trained, generated or acquired during the process of executing the font adjustment method based on face recognition. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a font adjustment method based on face recognition.
In one embodiment, a computer device is provided, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement the steps of the face recognition based font adjustment method in the above embodiment, such as steps S10-S50 shown in fig. 2, or the steps shown in fig. 3-8. Alternatively, the processor may implement the functions of each module/unit in this embodiment of the font adjusting device based on face recognition when executing the computer program, for example, the functions of each module/unit shown in fig. 9, which are not described herein again for the sake of avoiding repetition.
In an embodiment, a computer storage medium is provided, and a computer program is stored on the computer storage medium, where the computer program when executed by a processor implements the steps of the font adjustment method based on face recognition in the above embodiment, for example, steps S10-S50 shown in fig. 2, or steps shown in fig. 3-8, which are not repeated herein. Alternatively, the computer program when executed by the processor implements the functions of each module/unit in the embodiment of the font adjustment device based on face recognition, for example, the functions of each module/unit shown in fig. 9, which are not repeated here.
Those skilled in the art will appreciate that implementing all or part of the above methods may be accomplished by a computer program stored on a non-transitory computer-readable storage medium, which, when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the various embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated as an example; in practical applications, the above functions may be allocated to different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (8)

1. The font adjusting method based on face recognition is characterized by comprising the following steps of:
a camera on the intelligent terminal is adopted to collect a first face image of the current time of the system, and the first face image corresponds to a user identifier;
Performing face matching on the first face image based on the historical face image in the face database corresponding to the user identifier to obtain a face matching result;
if the face matching result is that the matching is failed, continuously acquiring a second face image corresponding to each moment by adopting the camera;
extracting feature points of the second face images corresponding to two adjacent moments respectively to obtain a first feature set and a second feature set, wherein the first feature points in the first feature set correspond to the second feature points in the second feature set;
selecting any two first feature points in the first feature set as a first target feature set, and selecting two second feature points in the second feature set corresponding to the first target feature set as a second target feature set;
performing distance calculation on two first feature points in at least one first target feature group to obtain at least one first feature point distance corresponding to the first feature set; performing distance calculation on two second feature points in at least one second target feature group to obtain at least one second feature point distance corresponding to the second feature set;
Respectively counting the first characteristic point distance and the second characteristic point distance to obtain a first distance value and a second distance value;
if the absolute value of the distance difference value between the first distance value and the second distance value is smaller than the first distance threshold value, obtaining an action analysis result of balance maintenance;
if the absolute value of the distance difference value between the first distance value and the second distance value is larger than the first distance threshold value, judging whether the first distance value is smaller than the second distance value or not;
if the first distance value is smaller than the second distance value and the distance difference between the second distance value and the first distance value is larger than a second distance threshold value, acquiring an action analysis result close to the screen;
if the first distance value is larger than the second distance value and the distance difference between the first distance value and the second distance value is larger than the second distance threshold, acquiring an action analysis result far away from a screen;
and dynamically adjusting the current font displayed on the screen of the intelligent terminal based on the action analysis result.
2. The face recognition-based font adjustment method according to claim 1, wherein after the action analysis result is obtained, the face recognition-based font adjustment method further comprises:
If the action analysis result is that balance is kept within the preset duration, the current screen distance and the current font size are obtained;
and taking the current screen distance and the current font size as target font adjustment strategies, and storing the target font adjustment strategies and the user identification in an associated mode.
3. The font adjustment method based on face recognition according to claim 1, wherein dynamically adjusting the current font displayed on the screen of the intelligent terminal based on the action analysis result comprises:
if the action analysis result is far away from the screen, reducing the current font displayed on the screen of the intelligent terminal according to a preset proportion;
and if the action analysis result is that the screen is close to the action analysis result, amplifying the current font displayed on the screen of the intelligent terminal according to a preset proportion.
4. The font adjustment method based on face recognition according to claim 1, wherein dynamically adjusting the current font displayed on the screen of the intelligent terminal based on the action analysis result comprises:
collecting the moving speed of the intelligent terminal in real time;
determining a font adjusting speed according to the moving speed of the intelligent terminal;
And dynamically adjusting the current font according to the font adjusting speed based on the action analysis result.
5. The face recognition-based font adjustment method according to claim 1, wherein after the face matching result is obtained, the face recognition-based font adjustment method further comprises:
if the face matching result is that the matching is successful, inputting the first face image into a feature detection model for feature detection to obtain face features;
inquiring a user image library corresponding to the user identifier based on the face characteristics, and acquiring a target user image corresponding to the face characteristics;
and dynamically adjusting according to the historical font adjustment strategy corresponding to the target user image.
6. A font adjusting device based on face recognition, comprising:
the system comprises a first face image acquisition module, a second face image acquisition module and a first face image acquisition module, wherein the first face image acquisition module is used for acquiring a first face image of the current time of the system by adopting a camera on an intelligent terminal, and the first face image corresponds to a user identifier;
the face matching result acquisition module is used for carrying out face matching on the first face image based on the historical face image in the face database corresponding to the user identifier to acquire a face matching result;
The second face image acquisition module is used for continuously acquiring a second face image corresponding to each moment by adopting the camera if the face matching result is that the matching fails;
the action analysis result acquisition module is used for extracting feature points of the second face images corresponding to two adjacent moments respectively to acquire a first feature set and a second feature set, wherein the first feature points in the first feature set correspond to the second feature points in the second feature set; selecting any two first feature points in the first feature set as a first target feature set, and selecting two second feature points in the second feature set corresponding to the first target feature set as a second target feature set; performing distance calculation on two first feature points in at least one first target feature group to obtain at least one first feature point distance corresponding to the first feature set; performing distance calculation on two second feature points in at least one second target feature group to obtain at least one second feature point distance corresponding to the second feature set; respectively counting the first characteristic point distance and the second characteristic point distance to obtain a first distance value and a second distance value; if the absolute value of the distance difference value between the first distance value and the second distance value is smaller than the first distance threshold value, obtaining an action analysis result of balance maintenance;
If the absolute value of the distance difference value between the first distance value and the second distance value is larger than the first distance threshold value, judging whether the first distance value is smaller than the second distance value or not;
if the first distance value is smaller than the second distance value and the distance difference between the second distance value and the first distance value is larger than a second distance threshold value, acquiring an action analysis result close to the screen;
if the first distance value is larger than the second distance value and the distance difference between the first distance value and the second distance value is larger than the second distance threshold, acquiring an action analysis result far away from a screen;
and the font dynamic adjustment module is used for dynamically adjusting the current font displayed on the screen of the intelligent terminal based on the action analysis result.
7. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the face recognition based font adjustment method according to any of claims 1 to 5 when the computer program is executed.
8. A computer storage medium storing a computer program, wherein the computer program when executed by a processor implements the steps of the face recognition based font adjustment method according to any one of claims 1 to 5.
CN201910841750.4A 2019-09-06 2019-09-06 Font adjustment method, device, equipment and medium based on face recognition Active CN110765847B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910841750.4A CN110765847B (en) 2019-09-06 2019-09-06 Font adjustment method, device, equipment and medium based on face recognition
PCT/CN2019/116945 WO2021042518A1 (en) 2019-09-06 2019-11-11 Face recognition-based font adjustment method, apparatus, device, and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910841750.4A CN110765847B (en) 2019-09-06 2019-09-06 Font adjustment method, device, equipment and medium based on face recognition

Publications (2)

Publication Number Publication Date
CN110765847A CN110765847A (en) 2020-02-07
CN110765847B true CN110765847B (en) 2023-08-04

Family

ID=69330312

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910841750.4A Active CN110765847B (en) 2019-09-06 2019-09-06 Font adjustment method, device, equipment and medium based on face recognition

Country Status (2)

Country Link
CN (1) CN110765847B (en)
WO (1) WO2021042518A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112213983B (en) * 2020-10-12 2021-10-08 安徽兴安电气设备股份有限公司 Secondary water supply equipment remote online monitoring system based on 5G communication
CN114625456B (en) * 2020-12-11 2023-08-18 腾讯科技(深圳)有限公司 Target image display method, device and equipment
CN112631485A (en) * 2020-12-15 2021-04-09 深圳市明源云科技有限公司 Zooming method and zooming device for display interface
CN114944040B (en) * 2022-05-27 2024-06-25 中国银行股份有限公司 Management method of self-service cash recycling machine, related device and computer storage medium
CN115499538B (en) * 2022-08-23 2023-08-22 广东以诺通讯有限公司 Screen display font adjusting method, device, storage medium and computer equipment
CN116110356B (en) * 2023-04-12 2023-08-01 浙江大学 Control method and system of underwater display system

Family Cites Families (11)

Publication number Priority date Publication date Assignee Title
CN101751209B (en) * 2008-11-28 2012-10-10 联想(北京)有限公司 Method and computer for adjusting screen display element
CN102752438B (en) * 2011-04-20 2014-08-13 中兴通讯股份有限公司 Method and device for automatically regulating terminal interface display
CN103000004B (en) * 2011-09-13 2014-09-17 三星电子(中国)研发中心 Monitoring method for visibility range from eyes to screen
CN103377643B (en) * 2012-04-26 2017-02-15 富泰华工业(深圳)有限公司 System and method for adjusting fonts
CN103176694A (en) * 2013-03-05 2013-06-26 广东欧珀移动通信有限公司 Mobile terminal font automatic regulation method and device
CN104239416A (en) * 2014-08-19 2014-12-24 北京奇艺世纪科技有限公司 User identification method and system
CN106648344B (en) * 2015-11-02 2019-03-01 重庆邮电大学 A kind of screen content method of adjustment and its equipment
CN107528972B (en) * 2017-08-11 2020-04-24 维沃移动通信有限公司 Display method and mobile terminal
CN107797664B (en) * 2017-10-27 2021-05-07 Oppo广东移动通信有限公司 Content display method and device and electronic device
CN108989571B (en) * 2018-08-15 2020-06-19 浙江大学滨海产业技术研究院 Adaptive font adjustment method and device for mobile phone character reading
CN109815804A (en) * 2018-12-19 2019-05-28 平安普惠企业管理有限公司 Exchange method, device, computer equipment and storage medium based on artificial intelligence

Also Published As

Publication number Publication date
CN110765847A (en) 2020-02-07
WO2021042518A1 (en) 2021-03-11

Similar Documents

Publication Publication Date Title
CN110765847B (en) Font adjustment method, device, equipment and medium based on face recognition
US11003893B2 (en) Face location tracking method, apparatus, and electronic device
US20230260321A1 (en) System And Method For Scalable Cloud-Robotics Based Face Recognition And Face Analysis
CN112364715B (en) Nuclear power operation abnormity monitoring method and device, computer equipment and storage medium
US10885638B2 (en) Hand detection and tracking method and device
CN109145771B (en) Face snapshot method and device
CN111222423A (en) Target identification method and device based on operation area and computer equipment
CN104537389A (en) Human face recognition method and terminal equipment
CN113780466B (en) Model iterative optimization method, device, electronic equipment and readable storage medium
CN112906529A (en) Face recognition light supplementing method and device, face recognition equipment and face recognition system
CN110610117A (en) Face recognition method, face recognition device and storage medium
CN111353364A (en) Dynamic face identification method and device and electronic equipment
CN116863522A (en) Acne grading method, device, equipment and medium
CN114698399A (en) Face recognition method and device and readable storage medium
CN111881740A (en) Face recognition method, face recognition device, electronic equipment and medium
CN110956133A (en) Training method of single character text normalization model, text recognition method and device
CN113569676B (en) Image processing method, device, electronic equipment and storage medium
CN114092515A (en) Target tracking detection method, device, equipment and medium for obstacle blocking
CN113421241A (en) Abnormal event reporting method and device, computer equipment and storage medium
CN110807403A (en) User identity identification method and device and electronic equipment
CN116645530A (en) Construction detection method, device, equipment and storage medium based on image comparison
CN110889357A (en) Underground cable fault detection method and device based on marked area
US20180260619A1 (en) Method of determining an amount, non-transitory computer-readable storage medium and information processing apparatus
CN115658525A (en) User interface checking method and device, storage medium and computer equipment
KR20190001873A (en) Apparatus for searching object and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant