CN110851032A - Display style adjustment method and device for target device - Google Patents


Info

Publication number
CN110851032A
CN110851032A (application CN201911077857.2A)
Authority
CN
China
Prior art keywords
user
target device
display style
information
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911077857.2A
Other languages
Chinese (zh)
Inventor
刘正阳 (Liu Zhengyang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN201911077857.2A
Publication of CN110851032A
Priority to PCT/CN2020/126110 (published as WO2021088790A1)
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/451: Execution arrangements for user interfaces
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods

Abstract

Embodiments of the present disclosure disclose a display style adjustment method and apparatus for a target device, an electronic device, and a computer-readable medium. One embodiment of the method comprises: determining whether a face of a user of the target device is displayed in a target image; in response to determining that the face of the user is displayed in the target image, determining intention information of the user based on at least one of the target image and a distance between the target device and the user, wherein the intention information characterizes the user's adjustment intention for the display style of the target device; and adjusting the display style of the target device based on the intention information. This embodiment enables a more personalized and targeted style presentation.

Description

Display style adjustment method and device for target device
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular to a display style adjustment method and apparatus for a target device, an electronic device, and a computer-readable medium.
Background
With the development of internet technology and the popularization of electronic devices such as mobile phones, people's daily life has changed dramatically. People can meet various demands such as travel, entertainment, learning, and shopping through various electronic devices and the applications installed on them. However, related applications usually require the user to adjust the display style manually according to his or her needs.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a display style adjustment method, apparatus, electronic device, and computer-readable medium for a target device.
In a first aspect, some embodiments of the present disclosure provide a display style adjustment method for a target device, the method comprising: determining whether a face of a user of the target device is displayed in a target image; in response to determining that the face of the user is displayed in the target image, determining intention information of the user based on at least one of the target image and a distance between the target device and the user, wherein the intention information characterizes the user's adjustment intention for the display style of the target device; and adjusting the display style of the target device based on the intention information.
In a second aspect, some embodiments of the present disclosure provide a display style adjustment apparatus for a target device, comprising: a first determination unit configured to determine whether a face of a user of the target device is displayed in a target image; a second determination unit configured to, in response to determining that the face of the user is displayed in the target image, determine intention information of the user based on at least one of the target image and the distance between the target device and the user, the intention information characterizing the user's adjustment intention for the display style of the target device; and an adjusting unit configured to adjust the display style of the target device based on the intention information.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon, which when executed by one or more processors, cause the one or more processors to implement the method as described in any of the implementations of the first aspect.
In a fourth aspect, some embodiments of the disclosure provide a computer readable medium having a computer program stored thereon, where the program when executed by a processor implements a method as described in any of the implementations of the first aspect.
One of the above embodiments of the present disclosure has the following beneficial effect: by determining the intention information of the user and adjusting the display style of the target device based on that information, a more personalized and targeted style presentation is achieved, so that the display style matches the user more closely. In this process, the intention information is determined based on at least one of the target image and the distance between the target device and the user, so that the requirements of different scenarios can be met.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 and Fig. 2 are schematic diagrams of an application scenario of a display style adjustment method for a target device according to some embodiments of the present disclosure;
Fig. 3 is a flow diagram of some embodiments of a display style adjustment method for a target device according to the present disclosure;
Fig. 4 is a schematic diagram of another application scenario of a display style adjustment method for a target device according to some embodiments of the present disclosure;
Fig. 5 is a flow diagram of further embodiments of a display style adjustment method for a target device according to the present disclosure;
Fig. 6 is a schematic structural diagram of some embodiments of a display style adjustment apparatus for a target device according to the present disclosure;
Fig. 7 is a schematic structural diagram of an electronic device suitable for implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the invention are shown in the drawings. The embodiments in the present disclosure, and the features of those embodiments, may be combined with each other as long as they do not conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It should be noted that references to "a", "an", and "the" in this disclosure are illustrative rather than limiting; those skilled in the art will understand that they mean "one or more" unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 and 2 are schematic diagrams of an application scenario of a display style adjustment method for a target device according to some embodiments of the present disclosure.
The display style adjustment method for the target device of some embodiments of the present disclosure is generally performed by a terminal device.
The terminal device may be hardware or software. When the terminal device is hardware, it may be any of various electronic devices that have a display screen, including but not limited to a smartphone, a tablet computer, a vehicle-mounted terminal, and the like. When the terminal device is software, it can be installed in the electronic devices listed above. It may be implemented, for example, as multiple pieces of software or software modules providing distributed services, or as a single piece of software or a single software module; no specific limitation is imposed here.
According to actual needs, a user can use the terminal device to interact with the server through the network so as to receive or send messages and the like.
In the application scenario shown in fig. 1 and 2, the execution subject of the display style adjustment method for the target device may be a news application 102 installed on a smartphone 101. User A may browse news through the news application 102. In this process, the news application 102 may capture an image of the face of user A through the camera 103 on the smartphone 101.
On this basis, with continued reference to fig. 2, for a face image 201 of user A captured by the camera, the news application 102 may use any of various face detection algorithms to determine whether user A's face is displayed in it. In response to determining that the face of user A is displayed, the intention information 203 of user A is determined based on the face image 201. In this application scenario, the intention information 203 is obtained by inputting the face image 201 into a pre-trained user intention recognition model 202. Taking the intention information 203 "enlarge the font size by 2 times" as an example, the font size on the smartphone 101 may then be enlarged based on the intention information 203, with the result shown on the right side of fig. 1.
With continued reference to fig. 3, a flow 300 of some embodiments of a display style adjustment method for a target device in accordance with the present disclosure is shown. The display style adjustment method for the target device comprises the following steps:
step 301 determines whether the target image shows the face of the user of the target device.
In some embodiments, the execution subject of the display style adjustment method for the target device may be the target device itself, or an application installed on the target device. The execution subject may first determine whether the target image shows the face of the user of the target device. The target image may be any image; it may be designated in advance or selected by screening under certain conditions. For example, an image currently captured by a camera of the target device may be taken as the target image. As another example, an image that meets a condition (for example, the most recently captured image whose capture time is no more than three seconds before the current time) may be selected from an image library (for example, an "album") as the target image. The target device may be any electronic device having a display screen. As an example, the electronic device currently used by the user may be taken as the target device.
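The screening condition above (most recently captured, no more than three seconds old) can be sketched as follows. This is an illustrative sketch only, not part of the disclosure: the `Photo` record, the field names, and the three-second window are assumptions for the example.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Photo:
    path: str
    captured_at: float  # capture time, seconds since epoch (assumed field)


def select_target_image(album: List[Photo], now: float,
                        max_age_s: float = 3.0) -> Optional[Photo]:
    """Pick the most recently captured photo no older than max_age_s,
    or None when no photo in the album is fresh enough."""
    fresh = [p for p in album if now - p.captured_at <= max_age_s]
    if not fresh:
        return None
    return max(fresh, key=lambda p: p.captured_at)
```

When no photo passes the screening, the caller can fall back to capturing a new image with the device camera, matching the first example in the text.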
In practice, whether the face of the user of the target device is displayed in the target image may be determined in various ways. For example, whether the face of the user of the target device is displayed in the image may be determined by a method of face keypoint detection. For another example, whether the face of the user of the target device is displayed in the target image may be determined by way of image similarity calculation.
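The keypoint-based decision mentioned above can be sketched as a simple rule over a detector's output. The detector itself is out of scope here; the keypoint names, the confidence threshold, and the dict-shaped output are all assumptions for illustration.

```python
from typing import Dict

# Assumed output shape of a face keypoint detector:
# keypoint name -> detection confidence in [0, 1].
REQUIRED_KEYPOINTS = ("left_eye", "right_eye", "nose", "mouth")


def face_displayed(keypoints: Dict[str, float],
                   min_conf: float = 0.5) -> bool:
    """Declare a face present only if every required keypoint was
    detected with sufficient confidence."""
    return all(keypoints.get(name, 0.0) >= min_conf
               for name in REQUIRED_KEYPOINTS)
```

The image-similarity route described in the same paragraph would instead compare the target image against a stored reference image of the user and threshold the similarity score.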
It should be noted that the phrase "user of the target device" here does not impose any limitation on the user.
Step 302, in response to determining that the face of the user is displayed in the target image, determining intention information of the user based on at least one of the target image and a distance between the target device and the user.
In some embodiments, in response to determining that the face of the user is displayed in the target image, the execution subject may determine the intention information of the user based on at least one of the target image and the distance between the target device and the user. The intention information characterizes the user's adjustment intention for the display style of the target device. By way of example, intention information includes, but is not limited to, information about adjusting the font, the font size, the color, the display position, and the like. For example, it may be "enlarge the font size", "lighten the color", and so on.
In some embodiments, as an example, the execution subject described above may determine intention information of the user based on the target image.
In an optional implementation of some embodiments, the target image may be input into a pre-trained user intention recognition model to obtain the intention information of the user. The user intention recognition model may be a trained artificial neural network. For example, a large number of training samples may be used to iteratively train a multi-layer neural network (e.g., a convolutional neural network (CNN) or a long short-term memory (LSTM) network) to obtain the user intention recognition model. Each training sample comprises a sample image and the corresponding sample intention information. In practice, the sample intention information corresponding to a sample image can be obtained by manual labeling or similar means. As an example, the sample image in a training sample may be used as the input of the network, and the corresponding sample intention information as its expected output. Specifically, the difference between the actual output and the sample intention information can be calculated with any of various loss functions, and the parameters of the network adjusted by stochastic gradient descent or a similar method until a preset condition is met, at which point training ends. Finally, the trained network can be used as the user intention recognition model.
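The loss-driven, sample-by-sample training loop described above can be illustrated with a deliberately tiny stand-in. This is not the disclosed model (a real user intention recognition model would be a deep network over images); it only shows the stochastic-gradient-descent update loop with a squared-error loss, and every name and constant below is an assumption.

```python
import random


def train_toy_intent_model(samples, epochs=200, lr=0.1):
    """Fit a single linear unit y = w*x + b to (feature, target) pairs
    by per-sample stochastic gradient descent on 0.5 * (pred - y)^2.
    Stands in for the iterative training of the intention model."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        random.shuffle(samples)        # stochastic: visit samples in random order
        for x, y in samples:
            pred = w * x + b
            err = pred - y             # d(loss)/d(pred) for squared error
            w -= lr * err * x          # gradient step on each parameter
            b -= lr * err
    return w, b
```

On noise-free data such as y = 2x the loop converges to w ≈ 2, b ≈ 0, mirroring how the real model's parameters are adjusted until the preset stopping condition is met.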
In an optional implementation of some embodiments, the target image is input into a pre-trained user feature extraction model to obtain feature information corresponding to the face displayed in the target image. The feature information describes various attributes (e.g., sex, age, occupation) of the face displayed in the target image. As an example, the user feature extraction model may be a trained artificial neural network; a large number of training samples, each comprising an image and its labeled feature information, may be used to iteratively train a multi-layer neural network (e.g., a CNN or an LSTM network) to obtain the user feature extraction model. On this basis, the intention information of the user is determined based on the feature information. As an example, this may be implemented by preset processing logic or by querying a correspondence table. For example, if the obtained feature information is "woman", the intention information of the user may be determined to be "adjust the color to pink" by querying the correspondence table.
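The correspondence-table lookup from feature information to intention information can be sketched directly. The table entries below merely mirror the "woman" example in the text and an invented second entry; they are illustrative assumptions, not a prescribed mapping.

```python
# Illustrative correspondence table: extracted feature -> intention.
FEATURE_TO_INTENTION = {
    "woman": "adjust the color to pink",   # example from the text
    "elderly": "enlarge the font size",    # assumed additional entry
}


def intention_from_features(features):
    """Return the intention for the first feature with a table entry,
    or None when no entry applies (preset processing logic could then
    take over as a fallback)."""
    for feature in features:
        if feature in FEATURE_TO_INTENTION:
            return FEATURE_TO_INTENTION[feature]
    return None
```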
In some embodiments, as an example, the execution subject may determine the intention information of the user based on the distance between the target device and the user. The distance can be obtained in various ways; for example, it may be detected by a distance sensor provided in the target device, or obtained by manual input. On this basis, as an example, the intention information may be obtained by querying a correspondence table that stores a large number of distances and the corresponding optimal display styles. That is, here the optimal display style may be determined as the intention information of the user.
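A distance-to-style correspondence table naturally takes the form of banded thresholds. The following sketch is illustrative only; the metre thresholds and point sizes are assumptions, not values from the disclosure.

```python
# (upper bound in metres, "optimal" style for distances below it);
# all concrete numbers are illustrative assumptions.
DISTANCE_TABLE = [
    (0.3, {"font_size": 14}),          # device held close: small text is fine
    (0.6, {"font_size": 18}),
    (float("inf"), {"font_size": 24}), # far away: largest text
]


def optimal_style_for_distance(distance_m: float) -> dict:
    """Look up the optimal display style for a measured user distance."""
    for upper, style in DISTANCE_TABLE:
        if distance_m < upper:
            return style
    return DISTANCE_TABLE[-1][1]
```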
Further, in some embodiments, as an example, the execution subject may determine the intention information of the user based on both the target image and the distance between the target device and the user. For the specific method of determining the distance, reference may be made to the descriptions elsewhere in the present disclosure, which are not repeated here. According to actual needs, the intention information can be determined in various ways from the target image and the distance. For example, first intention information and second intention information may be derived from the target image and from the distance, respectively; on this basis, the first and second intention information are fused to obtain the intention information of the user. Because the target image and the distance are used in combination, the determined intention information is more accurate.
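The disclosure does not specify the fusion rule, so the sketch below assumes one plausible choice: weighted averaging of numeric fields (such as a font scale factor), with fields present in only one source passing through unchanged.

```python
def fuse_intentions(first: dict, second: dict,
                    image_weight: float = 0.5) -> dict:
    """Fuse image-derived (first) and distance-derived (second) intention
    estimates. Numeric fields present in both are averaged with the given
    weight; all other fields pass through. The rule is an assumption."""
    fused = dict(second)
    for key, value in first.items():
        if key in second and isinstance(value, (int, float)):
            fused[key] = image_weight * value + (1 - image_weight) * second[key]
        else:
            fused[key] = value
    return fused
```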
Step 303, adjusting the display style of the target device based on the intention information.
In some embodiments, the execution subject may adjust the display style of the target device based on the intention information. For example, if the user's intention information is "adjust the color to pink", the execution subject may adjust the color to pink. Depending on the intention information, the display style may include, but is not limited to, at least one of the following: font, font size, color, display position, and the like.
In an alternative implementation of some embodiments, the display font size of the target device may be adjusted based on the intent information.
In an alternative implementation of some embodiments, a candidate display style may be determined based on the intention information. Then, it is determined whether the candidate display style satisfies a preset condition. The preset condition may be any condition constraining the display style according to actual needs; for example, it may be whether the target device supports the candidate display style, which avoids garbled characters and similar problems caused by unsupported styles. In response to determining that the candidate display style meets the preset condition, the display style of the target device is adjusted to the candidate display style.
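The guard above can be sketched as a small gate: adopt the candidate style only when the device supports it, otherwise keep the current style. The capability set and style fields are assumptions for the example.

```python
# Assumed device capability list, not from the disclosure.
SUPPORTED_FONTS = {"sans-serif", "serif"}


def apply_style(current: dict, candidate: dict) -> dict:
    """Adopt the candidate display style only when it satisfies the
    preset condition (device support); otherwise keep the current
    style, avoiding garbled rendering from an unsupported font."""
    if candidate.get("font") not in SUPPORTED_FONTS:
        return current
    return candidate
```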
In an optional implementation of some embodiments, the method further includes: acquiring user feedback information after the display style is adjusted; adjusting the intention information based on the user feedback information to obtain corrected intention information; and updating the user intention recognition model based on the target image and the corrected intention information.
In these implementations, the user feedback information describes the user's feedback on the adjusted display style. For example, it may be "satisfied", "unsatisfied", "font size too large", "font size too small", "color too bright", and the like. According to actual needs, the feedback can be acquired in various ways: by receiving text manually entered by the user; by receiving the user's voice input and analyzing it; or by photographing the user's face after the display style is adjusted and analyzing the face image. On this basis, the intention information is adjusted based on the user feedback information to obtain corrected intention information, with the specific adjustment determined according to actual needs. Finally, the user intention recognition model is updated based on the target image and the corrected intention information. Specifically, the target image and the corrected intention information may be used as a new training sample, and the user intention recognition model trained on it.
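The feedback-driven correction can be sketched for the font-size case, matching the 2x-to-1.5x example in the next scenario. The feedback phrases and correction factors are illustrative assumptions.

```python
# Feedback phrase -> multiplicative correction applied to the intended
# font enlargement factor; both sides of the table are assumptions.
FEEDBACK_CORRECTIONS = {
    "font size too large": 0.75,
    "font size too small": 1.25,
}


def correct_intention(scale: float, feedback: str) -> float:
    """Adjust the intended font-size scale based on user feedback;
    unrecognised feedback (e.g. "satisfied") leaves it unchanged."""
    return scale * FEEDBACK_CORRECTIONS.get(feedback, 1.0)
```

The corrected value, paired with the original target image, then forms the new training sample used to update the model.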
With continued reference to fig. 4, a schematic diagram of another application scenario of a display style adjustment method for a target device is shown, according to some embodiments of the present disclosure.
In this application scenario, after the font size is enlarged, a face image 404 of the user can be captured by the camera, and image analysis is performed on the face image 404 to obtain user feedback information 405. Here the user feedback information 405 is, for example, "font size too large".
On this basis, the intention information 403 may be adjusted based on the user feedback information 405, resulting in corrected intention information 406. Continuing the "font size too large" example, the intention information 403 of "enlarge the font size by 2 times" may be adjusted to obtain corrected intention information 406 of, for example, "enlarge the font size by 1.5 times". Finally, the target image 401 and the corrected intention information 406 may be taken as a new training sample 407, and the user intention recognition model 402 trained on this new sample 407 so as to update it. This makes the intention information produced by the user intention recognition model 402 more accurate.
The method provided by some embodiments of the present disclosure achieves a more personalized and targeted style presentation by determining the intention information of the user and adjusting the display style of the target device accordingly, so that the display style matches the user more closely. In this process, the intention information is determined based on at least one of the target image and the distance between the target device and the user, so that the requirements of different scenarios can be met.
With further reference to fig. 5, a flow 500 of further embodiments of a display style adjustment method for a target device is illustrated. The process 500 of the display style adjustment method for the target device includes the following steps:
step 501, determining whether a target image displays the face of the user of the target device.
Step 502, in response to determining that the face of the user is displayed in the target image, determining intention information of the user based on at least one of the target image and a distance between the target device and the user.
Wherein the intention information is used for representing the adjustment intention of the user for the display style of the target device.
In some embodiments, specific implementations of steps 501-502 and technical effects thereof may refer to those embodiments corresponding to fig. 3, and are not described herein again.
Step 503, adjusting the display style of the target device based on the intention information, comprising the following sub-steps:
step 5031, determining whether the target device is in a stationary state relative to the user and whether the duration of the stationary state exceeds a preset threshold.
In some embodiments, the execution subject may determine whether the target device is in a stationary state relative to the user. In practice, as an example, this may be determined by continuously capturing a plurality of images and performing motion detection on them. As another example, in a scene where the user is stationary, whether the target device is stationary relative to the user may also be detected by various sensors, such as an acceleration sensor, provided in the target device. If the target device is determined to be stationary relative to the user, it can then be determined whether the duration of the stationary state exceeds a preset threshold. If both conditions hold, step 5032 may be executed. Optionally, if the target device is not stationary relative to the user, or the duration of the stationary state does not exceed the preset threshold, the subsequent steps may be abandoned and no display style adjustment performed.
Step 5032, in response to determining that the target device is in a stationary state relative to the user and that the duration of the stationary state exceeds a preset threshold, adjusting the display style of the target device based on the intention information.
In some embodiments, on the basis of step 5031, in response to determining that the target device is in a stationary state relative to the user and that the duration of that state exceeds a preset threshold, the display style of the target device is adjusted based on the intention information. The adjustment is therefore triggered only when the target device is in a relatively stable state. In scenarios where the target device is unstable relative to the user (for example, the phone suddenly drops or the user suddenly leaves), display style adjustment is avoided, which saves processing resources and system overhead.
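The stationary-duration gate of steps 5031 and 5032 can be sketched over a stream of motion-detection samples. The sampling interval, the 2-second threshold, and the boolean sample format are illustrative assumptions.

```python
from typing import List


def should_adjust(stationary_samples: List[bool],
                  sample_interval_s: float,
                  min_duration_s: float = 2.0) -> bool:
    """Trigger display style adjustment only if the trailing run of
    'stationary' motion-detection samples spans at least min_duration_s.
    Any recent motion (a False sample) resets the run to zero."""
    run = 0
    for is_still in reversed(stationary_samples):
        if not is_still:
            break
        run += 1
    return run * sample_interval_s >= min_duration_s
```

Gating on the trailing run (rather than the total count) is what prevents a brief jolt, such as the phone dropping, from counting earlier stillness toward the threshold.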
As can be seen from fig. 5, compared with the description of some embodiments corresponding to fig. 3, the display style adjustment method for the target device in some embodiments corresponding to fig. 5 adds a step of determining the state of the target device with respect to the user before adjusting the display style of the target device. Therefore, the adjustment of the display style can be triggered only when the target device is in a relatively stable state, the occupation of processing resources is avoided, and the system overhead is saved.
With further reference to fig. 6, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of a display style adjustment apparatus 600 for a target device, which correspond to those of the method embodiments shown in fig. 3, and which may be particularly applied in various electronic devices.
As shown in fig. 6, the display style adjustment apparatus 600 for a target device of some embodiments includes: a first determining unit 601, a second determining unit 602, and an adjusting unit 603. The first determining unit 601 is configured to determine whether a face of a user of the target device is displayed in a target image. The second determining unit 602 is configured to, in response to determining that the face of the user is displayed in the target image, determine intention information of the user based on at least one of the target image and a distance between the target device and the user, the intention information characterizing the user's adjustment intention for the display style of the target device. The adjusting unit 603 is configured to adjust the display style of the target device based on the intention information.
In some optional implementations of some embodiments, the intention information is used to characterize the user's adjustment intention for the display font size of the target device; and the adjusting unit 603 is further configured to adjust the display font size of the target device based on the intention information.
In some optional implementations of some embodiments, the second determining unit 602 is further configured to: input the target image into a pre-trained user intention recognition model to obtain the intention information of the user.
In some optional implementations of some embodiments, the second determining unit 602 is further configured to: input the target image into a pre-trained user feature extraction model to obtain feature information corresponding to the face displayed in the target image; and determine the intention information of the user based on the feature information.
In some optional implementations of some embodiments, the adjusting unit 603 is further configured to: determining a candidate display style based on the intention information; determining whether the candidate display style meets a preset condition; and adjusting the display style of the target device to the candidate display style in response to determining that the candidate display style meets the preset condition.
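The candidate-then-check flow above can be sketched as follows; the legibility bounds standing in for the "preset condition" are assumed values for illustration only.

```python
def adjust_with_check(current_scale: float, intent_scale: float,
                      min_scale: float = 0.8, max_scale: float = 2.0) -> float:
    """Determine a candidate display style from the intention information and
    adopt it only if it satisfies a preset condition (here assumed to be a
    legible range of font scales)."""
    candidate = current_scale * intent_scale
    if min_scale <= candidate <= max_scale:
        return candidate       # condition met: adopt the candidate style
    return current_scale       # condition not met: keep the current style
```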
In some optional implementations of some embodiments, the adjusting unit 603 is further configured to: determine whether the target device is in a stationary state relative to the user and whether a duration of the stationary state exceeds a preset threshold; and in response to determining that the target device is in the stationary state relative to the user and that the duration of the stationary state exceeds the preset threshold, adjust the display style of the target device based on the intention information.
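One possible way to realize the stationary-state gating, assuming relative motion is sampled as a speed (e.g. derived from the accelerometer or successive distance readings); the epsilon and threshold values are illustrative assumptions.

```python
class StillnessGate:
    """Tracks whether the device has been stationary relative to the user
    for longer than a preset threshold before permitting an adjustment."""

    def __init__(self, threshold_s: float = 2.0, speed_eps: float = 0.05):
        self.threshold_s = threshold_s
        self.speed_eps = speed_eps   # speeds below this count as "stationary"
        self._still_since = None     # timestamp at which stillness began

    def may_adjust(self, relative_speed: float, now_s: float) -> bool:
        if abs(relative_speed) > self.speed_eps:
            self._still_since = None  # movement resets the timer
            return False
        if self._still_since is None:
            self._still_since = now_s
        return (now_s - self._still_since) >= self.threshold_s
```

Timestamps are passed in explicitly so the gate is easy to test; a real implementation would likely use a monotonic clock.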
In some optional implementations of some embodiments, the apparatus 600 further comprises: an acquisition unit (not shown in the figure), a correction unit (not shown in the figure) and an update unit (not shown in the figure). Wherein the acquisition unit is configured to: acquiring user feedback information after the display style is adjusted; the correction unit is configured to: adjusting the intention information based on the user feedback information to obtain corrected intention information; the update unit is configured to: the user intention recognition model is updated based on the target image and the corrected intention information.
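The feedback path above (acquire feedback, derive corrected intention information, queue the pair for a model update) might look like the following sketch; the correction rule — taking the scale the user manually settled on as the corrected label — is an assumption, not a rule stated in the disclosure.

```python
class FeedbackCollector:
    """Collects post-adjustment user feedback and turns it into corrected
    intention labels for a later update of the intention recognition model."""

    def __init__(self):
        self.training_samples = []  # (target_image, corrected intention) pairs

    def record(self, target_image, predicted_scale, user_final_scale=None):
        # Corrected intention information: the scale the user manually
        # settled on after the automatic adjustment; if the user made no
        # change, the original prediction is kept as the label.
        corrected = predicted_scale if user_final_scale is None else user_final_scale
        self.training_samples.append((target_image, corrected))
        return corrected
```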
In the embodiments, by determining the intention information of the user and adjusting the display style of the target device based on the intention information, more personalized and targeted style display is realized, so that the display style matches the user to a higher degree. In this process, the intention information of the user is determined based on at least one of the distance between the target device and the user and the target image, so that the requirements of different usage scenarios can be met.
Referring now to fig. 7, shown is a schematic diagram of an electronic device 700 suitable for use in implementing some embodiments of the present disclosure. The terminal device in some embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a fixed terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 7, the electronic device 700 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 701, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage device 708 into a random access memory (RAM) 703. The RAM 703 also stores various programs and data necessary for the operation of the electronic device 700. The processing device 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Generally, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; a storage device 708 including, for example, a memory card; and a communication device 709. The communication means 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 illustrates an electronic device 700 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 7 may represent one device or may represent multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network via communications means 709, or may be installed from storage 708, or may be installed from ROM 702. The computer program, when executed by the processing device 701, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: determining whether a face of a user of the target device is displayed in the target image; in response to determining that the face of the user is displayed in the target image, determining intention information of the user based on at least one of a distance between the target device and the user and the target image, wherein the intention information is used for representing adjustment intention of the user for a display style of the target device; adjusting a display style of the target device based on the intention information.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software, and may also be implemented by hardware. The described units may also be provided in a processor, and may be described as: a processor includes a first determining unit, a second determining unit, and an adjusting unit. Here, the names of these units do not constitute a limitation to the unit itself in some cases, and for example, the adjustment unit may also be described as a "unit that adjusts the display style of the target device based on the intention information".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
According to one or more embodiments of the present disclosure, there is provided a display style adjustment method for a target device, including: determining whether a face of a user of the target device is displayed in the target image; in response to determining that the face of the user is displayed in the target image, determining intention information of the user based on at least one of a distance between the target device and the user and the target image, wherein the intention information is used for representing adjustment intention of the user for a display style of the target device; adjusting a display style of the target device based on the intention information.
According to one or more embodiments of the present disclosure, the intention information is used for characterizing the adjustment intention of the user for the display font size of the target device; and adjusting the display style of the target device based on the intention information includes: adjusting the display font size of the target device based on the intention information.
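One simple way to realize the font-size adjustment, assuming the intention information reduces to the viewing distance; the distance breakpoints and scale factors below are illustrative assumptions, not values prescribed by the disclosure.

```python
def font_size_for_distance(distance_m: float, base_pt: int = 14) -> int:
    """Map the device-user distance to a display font size in points.
    Breakpoints are illustrative only."""
    if distance_m < 0.35:        # held close: the base size is comfortable
        return base_pt
    if distance_m < 0.60:        # arm's length: enlarge moderately
        return int(base_pt * 1.25)
    return int(base_pt * 1.5)    # farther away: enlarge further
```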
According to one or more embodiments of the present disclosure, determining intention information of the user based on at least one of the distance between the target device and the user and the target image includes: inputting the target image into a pre-trained user intention recognition model to obtain the intention information of the user.
According to one or more embodiments of the present disclosure, determining intention information of the user based on at least one of the distance between the target device and the user and the target image includes: inputting the target image into a pre-trained user feature extraction model to obtain feature information corresponding to the face displayed in the target image; and determining the intention information of the user based on the feature information.
According to one or more embodiments of the present disclosure, adjusting a display style of a target device based on intention information includes: determining a candidate display style based on the intention information; determining whether the candidate display style meets a preset condition; and adjusting the display style of the target device to the candidate display style in response to determining that the candidate display style meets the preset condition.
According to one or more embodiments of the present disclosure, adjusting a display style of the target device based on the intention information includes: determining whether the target device is in a stationary state relative to the user and whether a duration of the stationary state exceeds a preset threshold; and in response to determining that the target device is in the stationary state relative to the user and that the duration of the stationary state exceeds the preset threshold, adjusting the display style of the target device based on the intention information.
In accordance with one or more embodiments of the present disclosure, a method further comprises: acquiring user feedback information after the display style is adjusted; adjusting the intention information based on the user feedback information to obtain corrected intention information; the user intention recognition model is updated based on the target image and the corrected intention information.
According to one or more embodiments of the present disclosure, there is provided a display style adjustment apparatus for a target device, including: a first determination unit configured to determine whether a face of a user of a target device is displayed in a target image; a second determination unit configured to determine intention information of the user based on at least one of a distance between the target device and the user and the target image in response to determining that the face of the user is displayed in the target image, the intention information being used for characterizing an adjustment intention of the user for the display style of the target device; an adjusting unit configured to adjust a display style of the target device based on the intention information.
According to one or more embodiments of the present disclosure, there is provided an electronic device including: one or more processors; a storage device having one or more programs stored thereon which, when executed by one or more processors, cause the one or more processors to implement a method as in any above.
According to one or more embodiments of the present disclosure, a computer-readable medium is provided, on which a computer program is stored, wherein the program, when executed by a processor, implements any of the methods described above.
The foregoing description is merely a description of preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above inventive concept. For example, a technical solution may be formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (10)

1. A display style adjustment method for a target device, comprising:
determining whether a face of a user of the target device is displayed in a target image;
in response to determining that the target image has the user's face displayed therein, determining intent information of the user based on at least one of a distance between the target device and the user and the target image, the intent information characterizing an adjustment intent of the user to a display style of the target device;
adjusting a display style of the target device based on the intention information.
2. The method of claim 1, wherein the intent information is used to characterize the user's adjustment intent for the target device's display font size; and
the adjusting a display style of the target device based on the intention information includes:
adjusting the display font size of the target device based on the intention information.
3. The method of claim 1, wherein the determining intent information of the user based on at least one of a distance between the target device and the user, the target image, comprises:
inputting the target image into a pre-trained user intention recognition model to obtain intention information of the user.
4. The method of claim 1, wherein the determining intent information of the user based on at least one of a distance between the target device and the user, the target image, comprises:
inputting the target image into a pre-trained user feature extraction model to obtain feature information corresponding to the face displayed in the target image;
determining intention information of the user based on the feature information.
5. The method of claim 1, wherein the adjusting a display style of the target device based on the intent information comprises:
determining a candidate display style based on the intention information;
determining whether the candidate display style meets a preset condition;
adjusting the display style of the target device to the candidate display style in response to determining that the candidate display style satisfies the preset condition.
6. The method of claim 1, wherein the adjusting a display style of the target device based on the intent information comprises:
determining whether the target device is in a stationary state relative to the user and whether a duration of the stationary state exceeds a preset threshold;
in response to determining that the target device is in the stationary state relative to the user and that the duration of the stationary state exceeds the preset threshold, adjusting the display style of the target device based on the intent information.
7. The method of claim 3, wherein the method further comprises:
acquiring user feedback information after the display style is adjusted;
adjusting the intention information based on the user feedback information to obtain corrected intention information;
updating the user intent recognition model based on the target image and the revised intent information.
8. A display style adjustment apparatus for a target device, comprising:
a first determination unit configured to determine whether a face of a user of the target device is displayed in a target image;
a second determination unit configured to determine intention information of the user based on at least one of a distance between the target device and the user and the target image in response to a determination that the face of the user is displayed in the target image, the intention information being used to characterize an adjustment intention of the user for a display style of the target device;
an adjusting unit configured to adjust a display style of the target device based on the intention information.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-7.
CN201911077857.2A 2019-11-06 2019-11-06 Display style adjustment method and device for target device Pending CN110851032A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911077857.2A CN110851032A (en) 2019-11-06 2019-11-06 Display style adjustment method and device for target device
PCT/CN2020/126110 WO2021088790A1 (en) 2019-11-06 2020-11-03 Display style adjustment method and apparatus for target device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911077857.2A CN110851032A (en) 2019-11-06 2019-11-06 Display style adjustment method and device for target device

Publications (1)

Publication Number Publication Date
CN110851032A true CN110851032A (en) 2020-02-28

Family

ID=69598691

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911077857.2A Pending CN110851032A (en) 2019-11-06 2019-11-06 Display style adjustment method and device for target device

Country Status (2)

Country Link
CN (1) CN110851032A (en)
WO (1) WO2021088790A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102045429A (en) * 2009-10-13 2011-05-04 华为终端有限公司 Method and equipment for adjusting displayed content
CN103458115A (en) * 2013-08-22 2013-12-18 Tcl通讯(宁波)有限公司 Processing method for automatically setting size of system font of mobile terminal and mobile terminal
CN103733605A (en) * 2011-07-01 2014-04-16 英特尔公司 Adaptive text font and image adjustments in smart handheld devices for improved usability
CN106126017A (en) * 2016-06-20 2016-11-16 北京小米移动软件有限公司 Intelligent identification Method, device and terminal unit
CN106201261A (en) * 2016-06-30 2016-12-07 捷开通讯(深圳)有限公司 A kind of mobile terminal and display picture adjusting method thereof
CN106529449A (en) * 2016-11-03 2017-03-22 英华达(上海)科技有限公司 Method for automatically adjusting the proportion of displayed image and its display apparatus
CN106778623A (en) * 2016-12-19 2017-05-31 珠海格力电器股份有限公司 A kind of terminal screen control method, device and electronic equipment
CN107968890A (en) * 2017-12-21 2018-04-27 广东欧珀移动通信有限公司 theme setting method, device, terminal device and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105426021A (en) * 2015-12-21 2016-03-23 魅族科技(中国)有限公司 Method for displaying character and terminal
CN107491684A (en) * 2017-09-22 2017-12-19 广东巽元科技有限公司 A kind of screen control device and its control method based on recognition of face
CN109032345B (en) * 2018-07-04 2022-11-29 百度在线网络技术(北京)有限公司 Equipment control method, device, equipment, server and storage medium
CN110851032A (en) * 2019-11-06 2020-02-28 北京字节跳动网络技术有限公司 Display style adjustment method and device for target device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021088790A1 (en) * 2019-11-06 2021-05-14 北京字节跳动网络技术有限公司 Display style adjustment method and apparatus for target device
CN112507385A (en) * 2020-12-25 2021-03-16 北京字跳网络技术有限公司 Information display method and device and electronic equipment
CN113138705A (en) * 2021-03-16 2021-07-20 青岛海尔空调器有限总公司 Method, device and equipment for adjusting display mode of display interface
WO2022193698A1 (en) * 2021-03-16 2022-09-22 青岛海尔空调器有限总公司 Method and apparatus for adjusting presentation mode of display interface, and device

Also Published As

Publication number Publication date
WO2021088790A1 (en) 2021-05-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200228)