CN113126859A - Contextual model control method, contextual model control device, storage medium and terminal - Google Patents

Contextual model control method, contextual model control device, storage medium and terminal

Info

Publication number
CN113126859A
CN113126859A (application CN201911417958.XA)
Authority
CN
China
Prior art keywords
age
user
target
terminal
configuration parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911417958.XA
Other languages
Chinese (zh)
Inventor
孙永刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Original Assignee
Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yulong Computer Telecommunication Scientific Shenzhen Co Ltd filed Critical Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Priority to CN201911417958.XA
Publication of CN113126859A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 - Arrangements for executing specific programs
    • G06F9/445 - Program loading or initiating
    • G06F9/44505 - Configuring for program initiating, e.g. using registry, configuration files
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/178 - Human faces, e.g. facial parts, sketches or expressions estimating age from face image; using age information for improving recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the present application disclose a contextual model control method, a contextual model control device, a storage medium and a terminal. The contextual model control method comprises the following steps: detecting the age of a user; determining the target age interval, among a plurality of preset age intervals, in which the age falls, where the age intervals are each associated with configuration parameters, the configuration parameters include the size of a graphical interface element, and the age interval is positively correlated with the size of the graphical interface element; querying the target configuration parameters associated with the target age interval; and configuring the terminal according to the target configuration parameters. By detecting the age of the user currently using the terminal, matching the corresponding terminal configuration parameters to that age, and configuring the terminal with the matched parameters so that it enters the corresponding contextual model, the present application solves the problem in the related art that the user must switch the contextual model manually, and improves user stickiness.

Description

Contextual model control method, contextual model control device, storage medium and terminal
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and an apparatus for controlling a contextual model, a storage medium, and a terminal.
Background
As intelligent terminals such as mobile phones and tablet computers become an ever deeper part of daily life, they serve as tools for entertainment, study and communication for everyone from children to adults to the elderly. In the related art, an intelligent terminal can be designed to match different contextual models to people of different ages, so as to provide a better experience for users of any age. However, the inventor has found that in the related art the switching of the contextual model of an intelligent terminal must be set manually by the user, so the contextual model switching is inflexible.
Disclosure of Invention
The embodiments of the present application provide a contextual model control method and device, a computer storage medium and a terminal, which aim to solve the technical problem in the prior art that a terminal requires the user to switch contextual models manually. The technical solution is as follows:
in a first aspect, an embodiment of the present application provides a method for controlling a contextual model, where the method includes:
detecting the age of the user;
determining a target age interval, among a plurality of preset age intervals, in which the age falls; wherein each of the age intervals is associated with configuration parameters, the configuration parameters comprise a size of a graphical interface element, and the age interval is positively correlated with the size of the graphical interface element;
inquiring target configuration parameters associated with the target age interval;
and configuring the terminal according to the target configuration parameters.
In a second aspect, an embodiment of the present application provides a device for controlling a contextual model, where the device includes:
the age detection module is used for detecting the age of the user;
the first determining module is used for determining the target age interval, among a plurality of preset age intervals, in which the age falls; wherein each of the age intervals is associated with configuration parameters, the configuration parameters comprise a size of a graphical interface element, and the age interval is positively correlated with the size of the graphical interface element;
the second query module is used for querying the target configuration parameters associated with the target age interval;
and the parameter configuration module is used for configuring the terminal according to the target configuration parameters.
In a third aspect, embodiments of the present application provide a computer storage medium having a plurality of instructions adapted to be loaded by a processor and to perform the above-mentioned method steps.
In a fourth aspect, an embodiment of the present application provides a terminal, which may include: a memory and a processor; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the above-mentioned method steps.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
when the scheme of the embodiment of the application is executed, the terminal detects the age of the user, determines that the age of the user is located in a target age interval of a plurality of preset age intervals, the plurality of age intervals are respectively associated with configuration parameters, the configuration parameters comprise the size of a graphical interface element, the size of the age interval and the size of the graphical interface element are positively correlated, the terminal inquires the target configuration parameters associated with the target age interval, and the terminal is configured according to the target configuration parameters. According to the method and the device, the age of the user using the terminal at present is detected, the relevant terminal configuration parameters are matched according to the age of the user, the terminal is configured according to the matched configuration parameters, the terminal is set to be in the corresponding contextual model, the problem that the terminal needs the user to manually switch the contextual model in the related art is solved, and the viscosity of the user is improved.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of a terminal provided in an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an operating system and a user space provided in an embodiment of the present application;
FIG. 3 is an architectural diagram of the android operating system of FIG. 1;
FIG. 4 is an architecture diagram of the IOS operating system of FIG. 1;
fig. 5 is a schematic flowchart of a method for controlling a profile according to an embodiment of the present application;
fig. 6 is a schematic flowchart of another method for controlling a profile according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram illustrating an embodiment of a training age identification model;
FIG. 8 is a schematic diagram illustrating an effect of a user interface provided by an embodiment of the present application;
FIG. 9 is a schematic diagram illustrating an effect of another user interface provided by an embodiment of the present application;
FIG. 10 is a schematic diagram illustrating an effect of another user interface provided by an embodiment of the present application;
fig. 11 is a schematic structural diagram of a control device of a contextual model according to an embodiment of the present application.
Detailed Description
To make the objects, features and advantages of the embodiments of the present application clearer and easier to understand, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art from the embodiments given herein without creative effort fall within the protection scope of the present application.
When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims.
In the description of the present application, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The specific meaning of the above terms in the present application can be understood in a specific case by those of ordinary skill in the art.
Referring to fig. 1, a block diagram of a terminal according to an exemplary embodiment of the present application is shown. A terminal in the present application may include one or more of the following components: a processor 110, a memory 120, an input device 130, an output device 140, and a bus 150. The processor 110, memory 120, input device 130, and output device 140 may be connected by a bus 150.
Processor 110 may include one or more processing cores. The processor 110 connects the various parts of the terminal using various interfaces and lines, and performs the functions of the terminal and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and by calling data stored in the memory 120. Alternatively, the processor 110 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 110 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs and the like; the GPU renders and draws display content; and the modem handles wireless communication. It is understood that the modem may not be integrated into the processor 110 and may instead be implemented by a separate communication chip.
The memory 120 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 120 includes a non-transitory computer-readable medium. The memory 120 may be used to store instructions, programs, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the method embodiments described below, and the like. The operating system may be an Android system (including systems developed in depth on the basis of the Android system), an IOS system developed by Apple Inc. (including systems developed in depth on the basis of the IOS system), or another system. The data storage area may also store data created by the terminal in use, such as a phonebook, audio and video data, chat log data, and the like.
Referring to fig. 2, the memory 120 may be divided into an operating system space, in which an operating system runs, and a user space, in which native and third-party applications run. In order to ensure that different third-party application programs can achieve a better operation effect, the operating system allocates corresponding system resources for the different third-party application programs. However, the requirements of different application scenarios in the same third-party application program on system resources are different, for example, in a local resource loading scenario, the third-party application program has a higher requirement on the disk reading speed; in the animation rendering scene, the third-party application program has a high requirement on the performance of the GPU. The operating system and the third-party application program are independent from each other, and the operating system cannot sense the current application scene of the third-party application program in time, so that the operating system cannot perform targeted system resource adaptation according to the specific application scene of the third-party application program.
In order to enable the operating system to distinguish a specific application scenario of the third-party application program, data communication between the third-party application program and the operating system needs to be opened, so that the operating system can acquire current scenario information of the third-party application program at any time, and further perform targeted system resource adaptation based on the current scenario.
Taking the Android system as an example of the operating system, the programs and data stored in the memory 120 are as shown in fig. 3: the memory 120 may store a Linux kernel layer 320, a system runtime library layer 340, an application framework layer 360 and an application layer 380, where the Linux kernel layer 320, the system runtime library layer 340 and the application framework layer 360 belong to the operating system space, and the application layer 380 belongs to the user space. The Linux kernel layer 320 provides underlying drivers for the various hardware of the terminal, such as a display driver, an audio driver, a camera driver, a Bluetooth driver, a Wi-Fi driver, power management, and the like. The system runtime library layer 340 provides the main feature support for the Android system through a set of C/C++ libraries; for example, the SQLite library provides database support, the OpenGL/ES library provides support for 3D drawing, the WebKit library provides support for the browser kernel, and so on. The system runtime library layer 340 also provides the Android runtime, which mainly provides core libraries that allow developers to write Android applications in the Java language. The application framework layer 360 provides the various APIs that may be used to build applications, such as activity management, window management, view management, notification management, content providers, package management, session management, resource management and location management, and developers can build their own applications with these APIs. At least one application runs in the application layer 380; the applications may be native applications provided with the operating system, such as a contacts program, a short message program, a clock program, a camera application, and the like, or third-party applications developed by third-party developers, such as games, instant messaging programs, photo beautification programs, shopping programs, and the like.
Taking the IOS system as an example of the operating system, the programs and data stored in the memory 120 are shown in fig. 4. The IOS system includes: a core operating system layer 420 (Core OS Layer), a core services layer 440 (Core Services Layer), a media layer 460 (Media Layer), and a touchable layer 480 (Cocoa Touch Layer). The core operating system layer 420 includes the operating system kernel, drivers, and underlying program frameworks that provide functionality closer to the hardware for use by the program frameworks in the core services layer 440. The core services layer 440 provides the system services and/or program frameworks required by applications, such as a Foundation framework, an account framework, an advertisement framework, a data storage framework, a network connection framework, a geographic location framework, a motion framework, and so forth. The media layer 460 provides audiovisual interfaces for applications, such as graphics and image related interfaces, audio technology related interfaces, video technology related interfaces, wireless playback (AirPlay) interfaces for audio and video transmission, and the like. The touchable layer 480 provides various common interface-related frameworks for application development and is responsible for the user's touch interaction operations on the terminal, such as a local notification service, a remote push service, an advertising framework, a game tool framework, a messaging User Interface (UI) framework, the UIKit user interface framework, a map framework, and so forth.
In the framework shown in FIG. 4, the frameworks associated with most applications include, but are not limited to: the base framework in the core services layer 440 and the UIKit framework in the touchable layer 480. The base framework provides many basic object classes and data types and supplies the most basic system services to all applications, independently of the UI. The classes provided by the UIKit framework form a basic library of UI classes for creating touch-based user interfaces; iOS applications can build their UIs on the UIKit framework, so it provides the application's infrastructure for constructing user interfaces, drawing, handling user interaction events, responding to gestures, and the like.
For the manner and principle of implementing data communication between a third-party application and the operating system in the IOS system, reference may be made to the Android system; details are not repeated here.
The input device 130 is used for receiving input instructions or data, and the input device 130 includes, but is not limited to, a keyboard, a mouse, a camera, a microphone, or a touch device. The output device 140 is used for outputting instructions or data, and the output device 140 includes, but is not limited to, a display device, a speaker, and the like. In one example, the input device 130 and the output device 140 may be combined, and the input device 130 and the output device 140 are touch display screens for receiving touch operations of a user on or near the touch display screens by using any suitable object such as a finger, a touch pen, and the like, and displaying user interfaces of various applications. The touch display screen is generally provided at a front panel of the terminal. The touch display screen may be designed as a full-face screen, a curved screen, or a profiled screen. The touch display screen can also be designed to be a combination of a full-face screen and a curved-face screen, and a combination of a special-shaped screen and a curved-face screen, which is not limited in the embodiment of the present application.
In addition, those skilled in the art will appreciate that the terminal structures illustrated in the above figures do not constitute a limitation on the terminal; the terminal may include more or fewer components than illustrated, combine certain components, or use a different arrangement of components. For example, the terminal may further include a radio frequency circuit, an input unit, a sensor, an audio circuit, a wireless fidelity (WiFi) module, a power supply, a Bluetooth module, and other components, which are not described here again.
In the embodiment of the present application, the main body of execution of each step may be the terminal described above. Optionally, the execution subject of each step is an operating system of the terminal. The operating system may be an android system, an IOS system, or another operating system, which is not limited in this embodiment of the present application.
The terminal of the embodiments of the present application may also be provided with a display device, which may be any device capable of implementing a display function, for example: a cathode ray tube (CRT) display, a light-emitting diode (LED) display, an electronic ink panel, a liquid crystal display (LCD), a plasma display panel (PDP), and the like. The user can view displayed text, images, video and other information using the display device on the terminal. The terminal may be a smart phone, a tablet computer, a gaming device, an AR (Augmented Reality) device, an automobile, a data storage device, an audio playing device, a video playing device, a notebook, a desktop computing device, or a wearable device such as an electronic watch, electronic glasses, an electronic helmet, an electronic bracelet, an electronic necklace or electronic clothing.
In the terminal shown in fig. 1, the processor 110 may be configured to call an application program stored in the memory 120, and specifically execute the method for controlling the profile according to the embodiment of the present application.
When the solution of the embodiments of the present application is executed, the terminal detects the age of the user and determines the target age interval, among a plurality of preset age intervals, in which that age falls; the age intervals are each associated with configuration parameters, the configuration parameters include the size of a graphical interface element, and the age interval is positively correlated with the size of the graphical interface element. The terminal then queries the target configuration parameters associated with the target age interval and configures itself according to them. By detecting the age of the user currently using the terminal, matching the corresponding terminal configuration parameters to that age, and configuring the terminal with the matched parameters so that it enters the corresponding contextual model, the present application solves the problem in the related art that the user must switch the contextual model manually, and improves user stickiness.
In the following method embodiments, for convenience of description, only the main execution body of each step is described as a terminal.
Please refer to fig. 5, which is a flowchart illustrating a method for controlling a profile according to an embodiment of the present disclosure. As shown in fig. 5, the method of the embodiment of the present application may include the steps of:
S501, detecting the age of the user.
Generally, the age of a user can be detected in various ways, for example from a face image, from the user's voice, or from information entered by the user.
In a possible embodiment, detecting the age of the user from a face image comprises: the terminal collects a face image through the front camera, and the face image is processed by a trained age identification model for images, which outputs the age of the user.
In a possible embodiment, detecting the age of the user from the user's voice comprises: the user speaks an instruction, and the terminal processes the voice with a trained age identification model for speech, which outputs the age of the user.
In a possible embodiment, detecting the age of the user from user input comprises: the user enters his or her age on the display screen of the terminal according to prompt information, and the terminal takes the entered value as the age of the user.
S502, determining the target age interval, among a plurality of preset age intervals, in which the age falls.
The age intervals can be divided in various ways, and the age values in different intervals do not overlap; the embodiments of the present application do not limit how the age intervals are divided. The age intervals are each associated with configuration parameters, and the configuration parameters comprise one or more of the size of a graphical interface element, the brightness of the display screen and the volume of a prompt tone; the embodiments of the present application do not limit the configuration parameters. The age interval is positively correlated with the size of the graphical interface element, that is, the higher the age interval, the larger the corresponding graphical interface element; the age interval is positively correlated with the display screen brightness, that is, the higher the age interval, the brighter the corresponding display screen; and the age interval is positively correlated with the prompt tone volume, that is, the higher the age interval, the louder the corresponding prompt tone. Based on the detected age of the user, the terminal determines the specific interval, among the plurality of preset age intervals, in which that age falls, namely the target age interval.
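As an illustration of such an association, the sketch below maps the three age intervals that appear later in this embodiment (S606 and S613) to sets of configuration parameters; the element-size names follow S613, while the brightness and volume values are placeholder assumptions, since the embodiment does not prescribe concrete numbers.
```python
# Minimal sketch of preset age intervals and their associated configuration parameters.
# Higher intervals map to larger interface elements, brighter screens and louder tones
# (the positive correlation described above); the numeric values are illustrative only.
AGE_INTERVALS = [
    # (lower bound, upper bound, configuration parameters)
    (1, 45,   {"element_size": "standard", "screen_brightness": 0.5, "tone_volume": 0.5}),
    (46, 60,  {"element_size": "first",    "screen_brightness": 0.7, "tone_volume": 0.7}),
    (61, 100, {"element_size": "second",   "screen_brightness": 0.9, "tone_volume": 0.9}),
]

def find_target_interval(age: int):
    """Return the (lower, upper) bounds of the preset interval containing `age`, or None."""
    for lower, upper, _params in AGE_INTERVALS:
        if lower <= age <= upper:
            return (lower, upper)
    return None
```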
S503, inquiring target configuration parameters related to the target age interval.
Generally, the terminal determines the target age interval in which the age falls based on the age of the user, and then queries the target configuration parameters associated with that interval; the target configuration parameters are one set among the configuration parameters associated with the plurality of age intervals. The target configuration parameters, and the association between the target age interval and the target configuration parameters, may be stored locally on the terminal or on a network server.
In a possible implementation, the terminal queries the target configuration parameters associated with the target age interval as follows: the terminal obtains a key value of the target age interval and applies a hash function to that key value to obtain the storage address of the target age interval; the storage address stores the target configuration parameters corresponding to the target age interval, and the terminal reads the target configuration parameters from that storage address.
In another possible implementation, each configuration parameter is tagged with a label representing its age interval; the terminal obtains the label of the target age interval and searches the stored configuration parameters of the various age intervals for the entry carrying that label, thereby finding the target configuration parameters associated with the target age interval.
S504, the terminal is configured according to the target configuration parameters.
Generally, the terminal obtains the target configuration parameters from the storage space at the storage address and invokes the corresponding application program to adjust the corresponding settings according to the target configuration parameters. For example, if the target configuration parameters include the size of the graphical interface element, the size of the currently displayed graphical interface element is adjusted accordingly.
When the solution of the embodiments of the present application is executed, the terminal detects the age of the user and determines the target age interval, among a plurality of preset age intervals, in which that age falls; the age intervals are each associated with configuration parameters, the configuration parameters include the size of a graphical interface element, and the age interval is positively correlated with the size of the graphical interface element. The terminal then queries the target configuration parameters associated with the target age interval and configures itself according to them. By detecting the age of the user currently using the terminal, matching the corresponding terminal configuration parameters to that age, and configuring the terminal with the matched parameters so that it enters the corresponding contextual model, the present application solves the problem in the related art that the user must switch the contextual model manually, and improves user stickiness.
Please refer to fig. 6, which is a flowchart illustrating another method for controlling a profile according to an embodiment of the present disclosure. As shown in fig. 6, the method of the embodiment of the present application may include the steps of:
S601, acquiring a face image of a user through a front camera.
Generally, when the screen is on, the terminal acquires the face image of the user through the front-facing camera. When the front-facing camera captures an image, the image sensor actually collects optical signals, converts them into electrical signals, and transmits the electrical signals to a digital signal processor, which processes them into face image data.
S602, identifying the user identity based on the face image, and judging whether the user identity is a preset user.
A preset user is a frequent user stored locally on the terminal in advance; there may be one or more preset users. The terminal collects a face image of the user, recognizes the face image using face recognition technology, and judges whether the user is a preset user.
Generally, when the terminal recognizes a face image using face recognition technology, the face image undergoes face detection, facial feature point extraction, face recognition and other processing. Face detection judges whether a face is present in a dynamic scene or against a complex background and determines the relevant parameters of the face, including the size of the face in the image, its position, its pose and the like. Facial feature point extraction locates the key feature points of the face, including the facial features and the outline, in the face image. After feature point extraction, the key feature points must be corrected by an affine transformation; without this correction, accuracy is poor when a non-frontal face is recognized. Finally, the corrected face is input into a face recognition network, which may be a classification network; a particular layer of the classification network is extracted as the feature layer, and its output is taken as the features of the face. Face recognition then detects whether two faces belong to the same user: after the facial features are extracted, the Euclidean distance or cosine similarity is used to judge whether they are the same user.
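A hedged sketch of the identity check described above, comparing the feature vector extracted from the current face against stored feature vectors of the preset users using cosine similarity; the embedding source, the stored vectors and the 0.6 threshold are assumptions for illustration, not values given by the embodiment.
```python
import numpy as np

SIMILARITY_THRESHOLD = 0.6  # assumed value; in practice tuned to the face recognition network

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_preset_user(face_vector: np.ndarray, preset_vectors: list) -> bool:
    """Return True if the current face matches any locally stored preset user."""
    return any(cosine_similarity(face_vector, stored) >= SIMILARITY_THRESHOLD
               for stored in preset_vectors)
```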
S603, when the user identity is not a preset user, extracting the feature vector of the face image.
A feature vector representation of the face image transforms the image from pixel space into another space; each feature vector describes a variation or characteristic among faces, which means that every face image can be represented as a linear combination of such feature vectors.
Generally, when the terminal recognizes that the user is not a preset user, it extracts the feature vector of the face image corresponding to that user.
S604, when the user identity is a preset user, determining configuration parameters associated with the preset user.
The configuration parameters include parameters such as the size of a graphical interface element, the brightness of the display screen and the volume of a prompt tone. When the terminal recognizes the user as a preset user, it queries the configuration parameters associated with that preset user in a local storage file. When the terminal stores a preset user, it detects the age of the preset user, determines the age interval in which that age falls, and determines the configuration parameters associated with that age interval; the preset user, the age of the preset user, the configuration parameters associated with that age, and the associations among them are also stored.
For example, A is a preset user and is 26 years old; the configuration parameters associated with the age interval containing 26 correspond to the terminal's standard mode, in which the fonts, icons and the like of the graphical interface use the standard size. When the terminal detects that the current user is A, it determines that A is a preset user and queries the locally stored information about A, including the configuration parameters associated with A.
S605, configuring the terminal according to the configuration parameters associated with the preset user.
Generally, the terminal queries the configuration parameters associated with the preset user in the local storage file and adjusts the corresponding settings based on them. For example, if the configuration parameters are the font size and the icon size, then in the example of S604 the font size, icon size and the like of the terminal's graphical interface are all adjusted to the standard size.
S606, creating a face sample image set.
The face sample image set is a collection of a large number of face sample images. The face sample images can be divided into face sample image sets for different age groups according to the age intervals; the embodiments of the present application do not limit how the face sample image sets are classified.
For example, the terminal creates face sample image sets for 3 different age intervals: under 45 years old, 46 to 60 years old, and 61 to 100 years old. Within each interval the face sample images are divided with a granularity of 1 year, so the interval under 45 covers ages 1 to 45, the interval from 46 to 60 covers ages 46 to 60, and the interval from 61 to 100 covers ages 61 to 100, and the number of face sample images for each age is approximately equal.
S607, training on the face sample image set to obtain an age identification model.
The age identification model is a neural network model. A neural network model is a system model that simulates biological neurons: the network is formed by connecting a large number of neurons of the same form, each neuron can have several inputs and one output, the input of a neuron can be the output of other neurons or the input of the whole network, each neuron expresses a specific output function, namely an excitation function, and every connection between two neurons carries a connection strength, namely a weight applied to the signal passing through that connection.
Generally, a neural network model is created, a large number of face sample images together with the corresponding user ages are input into the model, and the model learns the relationship between each face sample image and the corresponding user age, so that it acquires the ability to identify a user's age from a face image; the trained model is the age identification model.
For example, as shown in fig. 7, the age identification model is trained from 3 face sample image sets: the face sample image set 70, the face sample image set 73 and the face sample image set 76. The age interval corresponding to the face sample images in set 70 is under 45 years old, the interval for set 73 is 46 to 60 years old, and the interval for set 76 is 61 to 100 years old. Training on the face sample image set 70 yields the age identification model 72, training on the face sample image set 73 yields the age identification model 75, and training on the face sample image set 76 yields the age identification model 78. Each age interval is divided with a granularity of 1 year, and the number of face sample images for each age in the 3 sets is the same, for example 3000; the embodiments of the present application do not limit this number.
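A minimal training sketch for one such age identification model, assuming a PyTorch environment, 64x64 RGB face crops and a small convolutional regressor; the architecture, hyperparameters and dataset format are placeholder assumptions, since the embodiment does not prescribe a particular network.
```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

class AgeRegressor(nn.Module):
    """Tiny CNN that maps a 64x64 RGB face crop to a predicted age."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 1))

    def forward(self, x):
        return self.head(self.features(x)).squeeze(1)

def train_age_model(dataset, epochs: int = 10) -> AgeRegressor:
    """Train on (image_tensor, age) pairs; one model per age-interval sample set, as in Fig. 7."""
    model = AgeRegressor()
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for images, ages in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), ages.float())
            loss.backward()
            optimizer.step()
    return model
```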
S608, identifying the feature vector based on the age identification model to obtain the age of the user.
Generally, based on the feature vector of the face image extracted in S603, the feature vector of the face image is input to an age recognition model, and the age of the user corresponding to the face image is obtained.
For example: the user currently using the terminal is user B, the terminal extracts the feature vector of user B, and the feature vector of user B is input into the age recognition model 72, the age recognition model 75, and the age recognition model 78 in fig. 7, respectively, and the age of user B is 30 years.
And S609, determining that the age of the user is in a target age interval in a plurality of preset age intervals.
Generally, the terminal identifies the age of the user through an age identification model, divides an age interval in which the age of the user is located, and each user has a target age interval. In the example of S608, the age of the user B is 30 years, and the terminal determines that the age interval in which the age of the user B is located is 45 years or less.
S610, obtaining a key value of the target age interval.
The key value is used to distinguish different age intervals in the storage space. The hash function involves two quantities, the key value and the function value, where the function value is computed from the key value by the hash function. The terminal obtains the key value corresponding to the determined target age interval.
S611, determining the storage address of the target age interval through a hash function based on the key value.
The storage address is the address at which the configuration parameters associated with an age interval are stored; it is calculated by the hash function from the key value of the target age interval.
S612, querying the target configuration parameters associated with the target age interval according to the storage address.
The target configuration parameters include one or more of a target display screen brightness, a target size of the graphical interface elements and a target volume of the prompt tone, and the graphical interface elements include icons, fonts, the screen resolution and the like. Generally, the terminal queries the target configuration parameters associated with the target age interval according to the address at which they are stored.
For example: in the example of S608, the target age interval in which the user B is located is 45 years or less, and the target configuration parameters corresponding to the target age interval are normal prompt tone volume, standard icon size and standard font size, normal display screen brightness, and the like.
S613, configuring the terminal according to the target configuration parameters.
Generally, a terminal obtains a target configuration parameter at a storage address and adjusts the setting accordingly.
For example: the configuration parameter is the size of the image interface element, and the number terminal is provided with 3 age intervals which are respectively: under the age of 45, between the age of 46 and the age of 60, and between the age of 61 and the age of 100, the sizes of the graphical interface elements corresponding to the age intervals are respectively a standard size, a first size and a second size. The standard size is minimum, the second size is maximum, and the first size is larger than the standard size and smaller than the second size. In the example of S608, the age of the user B is 30 years, the size of the corresponding graphical interface element is set to the standard size, the size of the graphical interface font and the size of the icon are both standard sizes, when the terminal detects that the age of the current user C is 50 years, the size of the corresponding graphical interface element is set to the first size, the size of the graphical interface font and the size of the icon are both the first size, when the terminal detects that the age of the current user C is 62 years, the size of the corresponding graphical interface element is set to the second size, and the size of the graphical interface font and the size of the icon are both the second size. Schematic diagrams of 3 sizes of the image interface element, namely a standard size, a first size and a second size are respectively shown in fig. 8, fig. 9 and fig. 10.
In a possible implementation, the terminal may detect whether the current display screen brightness is lower than the target display screen brightness; when it is, the terminal further obtains its remaining battery power, and when the remaining battery power is greater than the power threshold, it updates the current display screen brightness to the target display screen brightness. When the current display screen brightness is detected to be higher than the target display screen brightness, the terminal does not adjust it.
For example, in the example of S608 the target age interval of user B is 45 years or under, the target display screen brightness for that interval is the normal brightness, and the power threshold is set to 50%. When the terminal detects that the current display screen brightness is lower than the normal brightness and the remaining battery power is 60%, which is greater than the power threshold, it updates the display screen brightness to the normal brightness. When the terminal detects that the current display screen brightness is lower than the normal brightness but the remaining battery power is 30%, which is less than the power threshold, it does not adjust the display screen brightness.
When the solution of the embodiments of the present application is executed, the terminal detects the age of the user and determines the target age interval, among a plurality of preset age intervals, in which that age falls; the age intervals are each associated with configuration parameters, the configuration parameters include the size of a graphical interface element, and the age interval is positively correlated with the size of the graphical interface element. The terminal then queries the target configuration parameters associated with the target age interval and configures itself according to them. By detecting the age of the user currently using the terminal, matching the corresponding terminal configuration parameters to that age, and configuring the terminal with the matched parameters so that it enters the corresponding contextual model, the present application solves the problem in the related art that the user must switch the contextual model manually, and improves user stickiness.
Fig. 11 is a schematic structural diagram of a control device of a contextual model according to an embodiment of the present application. The control means of the profile may be implemented as all or part of the terminal by software, hardware or a combination of both. The device includes:
an age detection module 1110 for detecting an age of a user;
a first determining module 1120, connected to the age detecting module 1110, for determining that the age is in a target age interval of a plurality of preset age intervals; wherein each of the age intervals is associated with a configuration parameter, the configuration parameter comprises a size of a graphical interface element, and the size of the age interval and the size of the graphical interface element are in positive correlation;
a second query module 1130, connected to the age detection module 1110 and the first determination module 1120, for querying the target configuration parameters associated with the target age interval;
the parameter configuration module 1140 is connected to the age detection module 1110, the first determination module 1120, and the second query module 1130, and is configured to configure the terminal according to the target configuration parameters.
Optionally, the age detection module 1110 comprises:
the first extraction unit is used for extracting the characteristic vector of the face image;
and the first identification unit is used for identifying the characteristic vector based on an age identification model to obtain the age of the user.
Optionally, the second query module 1130 comprises:
a first obtaining unit, configured to obtain a key value of the target age interval;
an address obtaining unit, configured to determine, based on the key value, a storage address of the target age interval through a hash function;
and the first query unit is used for querying the target configuration parameters associated with the target age interval according to the storage address.
Optionally, the age detection module 1110 further comprises:
a sample creating unit for creating a face sample image set; the face sample image set comprises a plurality of face sample images, and each face sample image is associated with a label representing age;
and the model training unit is used for training the face sample image set to obtain the age identification model.
When the solution of the embodiments of the present application is executed, the terminal detects the age of the user and determines the target age interval, among a plurality of preset age intervals, in which that age falls; the age intervals are each associated with configuration parameters, the configuration parameters include the size of a graphical interface element, and the age interval is positively correlated with the size of the graphical interface element. The terminal then queries the target configuration parameters associated with the target age interval and configures itself according to them. By detecting the age of the user currently using the terminal, matching the corresponding terminal configuration parameters to that age, and configuring the terminal with the matched parameters so that it enters the corresponding contextual model, the present application solves the problem in the related art that the user must switch the contextual model manually, and improves user stickiness.
An embodiment of the present application further provides a computer storage medium, where the computer storage medium may store a plurality of instructions, where the instructions are suitable for being loaded by a processor and executing the above method steps, and a specific execution process may refer to specific descriptions of the embodiments shown in fig. 5 and fig. 6, which are not described herein again.
The application also provides a terminal, which comprises a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the above-mentioned method steps.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory or a random access memory.
The above disclosure describes only preferred embodiments of the present application and is not intended to limit the scope of the present application; the present application is not limited to these embodiments, and equivalent variations and modifications may be made to it.

Claims (10)

1. A method for controlling a contextual model, the method comprising:
detecting the age of the user;
determining a target age interval, among a plurality of preset age intervals, in which the age falls; wherein each of the age intervals is associated with configuration parameters, the configuration parameters comprise a size of a graphical interface element, and the age interval is positively correlated with the size of the graphical interface element;
inquiring target configuration parameters associated with the target age interval;
and configuring the terminal according to the target configuration parameters.
2. The method of claim 1, wherein the target configuration parameters further comprise: target display screen brightness;
wherein, the configuring the terminal according to the target configuration parameter includes:
comparing the brightness of the current display screen with the brightness of the target display screen;
if the target display screen brightness is greater than the current display screen brightness, acquiring the remaining battery power of the terminal;
and when the remaining battery power is greater than a power threshold, updating the current display screen brightness to the target display screen brightness.
3. The method of claim 1, wherein the querying the target configuration parameters associated with the target age interval comprises:
obtaining a key value of the target age interval;
determining a storage address of the target age interval through a hash function based on the key value;
and inquiring the target configuration parameters associated with the target age interval according to the storage address.
4. The method of claim 1, wherein before the detecting the age of the user, the method further comprises:
acquiring a face image of a user through a front camera;
and identifying the user identity based on the face image, and determining that the user identity is not a preset user.
5. The method of claim 4, wherein detecting the age of the user comprises:
extracting a feature vector of the face image;
and identifying the feature vector based on an age identification model to obtain the age of the user.
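The claims do not prescribe a particular feature extractor or age identification model; the following non-limiting sketch uses a hypothetical pixel-based extractor and any regressor that exposes a predict method:

```python
import numpy as np

def extract_feature_vector(face_image: np.ndarray) -> np.ndarray:
    """Reduce a grayscale face image (assumed to be at least 64x64 pixels) to a
    fixed-length feature vector; this extractor is purely illustrative."""
    patch = face_image[:64, :64].astype(np.float32) / 255.0
    return patch.flatten()

def detect_user_age(age_model, face_image: np.ndarray) -> int:
    """Identify the feature vector with the age recognition model to obtain the age."""
    features = extract_feature_vector(face_image).reshape(1, -1)
    return int(round(float(age_model.predict(features)[0])))
```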
6. The method of claim 5, wherein before the identifying the feature vector based on the age identification model to obtain the age of the user, the method further comprises:
creating a face sample image set; the face sample image set comprises a plurality of face sample images, and each face sample image is associated with a label representing age;
and training the face sample image set to obtain the age identification model.
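A non-limiting training sketch for claim 6, reusing the extract_feature_vector helper from the previous sketch; scikit-learn's RandomForestRegressor stands in for whatever age identification model the embodiment actually trains:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_age_model(face_sample_images: list, age_labels: list):
    """Train an age recognition model from a face sample image set in which each
    face sample image is associated with a label representing age."""
    X = np.stack([extract_feature_vector(img) for img in face_sample_images])
    y = np.asarray(age_labels, dtype=np.float32)
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X, y)
    return model
```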
7. The method of claim 4 or 5, further comprising:
when the user identity is identified as the preset user, determining configuration parameters associated with the preset user;
and configuring the terminal according to the configuration parameters associated with the preset user.
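An illustrative sketch of how the branches of claims 4 and 7 could fit together; identify_user and terminal.apply are hypothetical helpers, and configure_terminal_for_age / detect_user_age refer to the earlier sketches:

```python
def apply_profile(terminal, face_image, preset_user_configs: dict, age_model) -> None:
    """A recognized preset user gets their own stored configuration (claim 7);
    any other user falls through to the age-based flow of claims 1 and 5."""
    user_id = identify_user(face_image)  # hypothetical face-recognition helper
    if user_id in preset_user_configs:
        terminal.apply(preset_user_configs[user_id])       # claim 7
    else:
        age = detect_user_age(age_model, face_image)       # claim 5
        terminal.apply(configure_terminal_for_age(age))    # claim 1
```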
8. An apparatus for controlling a scene mode, the apparatus comprising:
the age detection module is used for detecting the age of the user;
the first determining module is used for determining that the age is in a target age interval in a plurality of preset age intervals; wherein each of the age intervals is associated with a configuration parameter, the configuration parameter comprises a size of a graphical interface element, and the size of the age interval and the size of the graphical interface element are in positive correlation;
the second query module is used for querying the target configuration parameters associated with the target age interval;
and the parameter configuration module is used for configuring the terminal according to the target configuration parameters.
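For orientation only, a skeletal sketch of how the modules of claim 8 could be wired together; the module names and interfaces are illustrative:

```python
class SceneModeControlApparatus:
    """Module layout mirroring claim 8; names and call signatures are illustrative."""

    def __init__(self, age_detection_module, determining_module,
                 query_module, parameter_configuration_module):
        self.age_detection_module = age_detection_module
        self.determining_module = determining_module
        self.query_module = query_module
        self.parameter_configuration_module = parameter_configuration_module

    def run(self, terminal, face_image) -> None:
        age = self.age_detection_module.detect(face_image)
        interval = self.determining_module.target_interval(age)
        config = self.query_module.query(interval)
        self.parameter_configuration_module.configure(terminal, config)
```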
9. A computer storage medium, characterized in that it stores a plurality of instructions adapted to be loaded by a processor and to carry out the method steps according to any one of claims 1 to 7.
10. A terminal, comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the method steps of any of claims 1 to 7.
CN201911417958.XA 2019-12-31 2019-12-31 Contextual model control method, contextual model control device, storage medium and terminal Pending CN113126859A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911417958.XA CN113126859A (en) 2019-12-31 2019-12-31 Contextual model control method, contextual model control device, storage medium and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911417958.XA CN113126859A (en) 2019-12-31 2019-12-31 Contextual model control method, contextual model control device, storage medium and terminal

Publications (1)

Publication Number Publication Date
CN113126859A true CN113126859A (en) 2021-07-16

Family

ID=76769596

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911417958.XA Pending CN113126859A (en) 2019-12-31 2019-12-31 Contextual model control method, contextual model control device, storage medium and terminal

Country Status (1)

Country Link
CN (1) CN113126859A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102760077A (en) * 2011-04-29 2012-10-31 广州三星通信技术研究有限公司 Method and device for self-adaptive application scene mode on basis of human face recognition
US20160085950A1 (en) * 2014-05-19 2016-03-24 Xiling CHEN Method and system for controlling usage rights and user modes based on face recognition
CN105574386A (en) * 2015-06-16 2016-05-11 宇龙计算机通信科技(深圳)有限公司 Terminal mode management method and apparatus
CN106778623A (en) * 2016-12-19 2017-05-31 珠海格力电器股份有限公司 A kind of terminal screen control method, device and electronic equipment
CN110265040A (en) * 2019-06-20 2019-09-20 Oppo广东移动通信有限公司 Training method, device, storage medium and the electronic equipment of sound-groove model
CN110399813A (en) * 2019-07-10 2019-11-01 深兰科技(上海)有限公司 A kind of age recognition methods, device, electronic equipment and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114378850A (en) * 2022-03-23 2022-04-22 北京优全智汇信息技术有限公司 Interaction method and system of customer service robot, electronic equipment and storage medium
CN114378850B (en) * 2022-03-23 2022-07-01 北京优全智汇信息技术有限公司 Interaction method and system of customer service robot, electronic equipment and storage medium
CN117873631A (en) * 2024-03-12 2024-04-12 深圳市微克科技股份有限公司 Dial icon generation method, system and medium based on user crowd matching
CN117873631B (en) * 2024-03-12 2024-05-17 深圳市微克科技股份有限公司 Dial icon generation method, system and medium based on user crowd matching

Similar Documents

Publication Publication Date Title
CN107889070B (en) Picture processing method, device, terminal and computer readable storage medium
WO2020156199A1 (en) Application login method and device, terminal and storage medium
CN110413347B (en) Advertisement processing method and device in application program, storage medium and terminal
CN111767554B (en) Screen sharing method and device, storage medium and electronic equipment
CN112839223B (en) Image compression method, image compression device, storage medium and electronic equipment
CN111459586A (en) Remote assistance method, device, storage medium and terminal
CN111198724A (en) Application program starting method and device, storage medium and terminal
CN113268212A (en) Screen projection method and device, storage medium and electronic equipment
CN111176533A (en) Wallpaper switching method, device, storage medium and terminal
CN110702346B (en) Vibration testing method and device, storage medium and terminal
CN111127469A (en) Thumbnail display method, device, storage medium and terminal
CN113126859A (en) Contextual model control method, contextual model control device, storage medium and terminal
CN113163055B (en) Vibration adjusting method and device, storage medium and electronic equipment
CN112218130A (en) Control method and device for interactive video, storage medium and terminal
CN111866372A (en) Self-photographing method, device, storage medium and terminal
CN114285936A (en) Screen brightness adjusting method and device, storage medium and terminal
CN107765858B (en) Method, device, terminal and storage medium for determining face angle
CN112328339A (en) Notification message display method and device, storage medium and electronic equipment
CN111859999A (en) Message translation method, device, storage medium and electronic equipment
CN111538997A (en) Image processing method, image processing device, storage medium and terminal
CN112256354A (en) Application starting method and device, storage medium and electronic equipment
CN113495641A (en) Touch screen ghost point identification method and device, terminal and storage medium
WO2023273936A1 (en) Wallpaper setting method and apparatus, and storage medium and electronic device
CN111212411B (en) File transmission method, device, storage medium and terminal
CN115314588B (en) Background synchronization method, device, terminal, equipment, system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20210716