WO2021088790A1 - Method and apparatus for adjusting a display style of a target device - Google Patents

Method and apparatus for adjusting a display style of a target device

Info

Publication number
WO2021088790A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
target device
display style
intention information
intention
Prior art date
Application number
PCT/CN2020/126110
Other languages
English (en)
Chinese (zh)
Inventor
刘正阳
Original Assignee
北京字节跳动网络技术有限公司
Priority date
Filing date
Publication date
Application filed by 北京字节跳动网络技术有限公司
Publication of WO2021088790A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 - Arrangements for executing specific programs
    • G06F9/451 - Execution arrangements for user interfaces
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Definitions

  • the embodiments of the present disclosure relate to the field of computer technology, and in particular to a method, apparatus, electronic device, and computer-readable medium for adjusting a display style of a target device.
  • This summary is provided to introduce concepts in a brief form; these concepts will be described in detail in the specific embodiments that follow.
  • This summary is not intended to identify key features or essential features of the technical solution for which protection is sought, nor is it intended to limit the scope of the technical solution for which protection is sought.
  • Some embodiments of the present disclosure propose a display style adjustment method, apparatus, electronic device, and computer-readable medium for a target device.
  • In a first aspect, some embodiments of the present disclosure provide a display style adjustment method for a target device.
  • The method includes: determining whether the face of a user of the target device is displayed in a target image; in response to determining that the user's face is displayed in the target image, determining the user's intention information based on at least one of the distance between the target device and the user and the target image, where the intention information is used to characterize the user's intention to adjust the display style of the target device; and adjusting the display style of the target device based on the intention information.
  • In another aspect, some embodiments of the present disclosure provide a display style adjustment apparatus for a target device, including: a first determining unit configured to determine whether the face of a user of the target device is displayed in a target image; a second determining unit configured to, in response to determining that the user's face is displayed in the target image, determine the user's intention information based on at least one of the distance between the target device and the user and the target image, where the intention information is used to characterize the user's intention to adjust the display style of the target device; and an adjustment unit configured to adjust the display style of the target device based on the intention information.
  • In another aspect, some embodiments of the present disclosure provide an electronic device, including: one or more processors; and a storage device on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement the method described in any implementation manner of the first aspect.
  • In another aspect, some embodiments of the present disclosure provide a computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method described in any implementation manner of the first aspect.
  • The above embodiments of the present disclosure have the following beneficial effect: by determining the user's intention information and adjusting the display style of the target device based on that intention information, a more personalized and targeted style display is realized, so that the display style better matches the user.
  • In addition, because the user's intention information is determined based on the distance from the user and/or the target image, the method can adapt to the requirements of different scenarios.
  • Figures 1 and 2 are schematic diagrams of an application scenario of a method for adjusting a display style of a target device according to some embodiments of the present disclosure.
  • FIG. 3 is a flowchart of a display style adjustment method for a target device according to some embodiments of the present disclosure.
  • Fig. 4 is a schematic diagram of another application scenario of a method for adjusting a display style of a target device according to some embodiments of the present disclosure.
  • FIG. 5 is a flowchart of other embodiments of a method for adjusting a display style of a target device according to the present disclosure.
  • Fig. 6 is a schematic structural diagram of some embodiments of a display style adjustment apparatus for a target device according to the present disclosure.
  • FIG. 7 is a schematic structural diagram of an electronic device suitable for implementing some embodiments of the present disclosure.
  • Figures 1 and 2 are schematic diagrams of an application scenario of a method for adjusting a display style of a target device according to some embodiments of the present disclosure.
  • the display style adjustment method for the target device in some embodiments of the present disclosure is generally executed by the terminal device.
  • the terminal device can be hardware or software.
  • the terminal device may be various electronic devices with a display screen and supporting display, including but not limited to smart phones, tablet computers, vehicle-mounted terminals, and so on.
  • If the terminal device is software, it can be installed in the electronic devices listed above. It can be implemented, for example, as multiple pieces of software or software modules for providing distributed services, or as a single piece of software or software module. There is no specific limitation here.
  • users can use terminal devices to interact with the server through the network to receive or send messages.
  • the execution subject of the method for adjusting the display style of the target device may be the news application 102 installed on the smart phone 101.
  • User A can browse news through the news application 102.
  • the news application 102 can capture the facial image of the user A through the camera 103 on the smartphone 101.
  • the news application 102 can use various face detection algorithms to determine whether the face of user A is displayed therein.
  • In response to determining that the face of user A is displayed, the intention information 203 of user A is determined.
  • the user's intention information 203 is obtained by inputting the facial image 201 into the pre-trained user intention recognition model 202.
  • the font size of the smartphone 101 can be enlarged based on the intention information 203, and the result is shown on the right side of FIG. 1.
  • Referring to FIG. 3, there is shown a process 300 of some embodiments of a method for adjusting a display style of a target device according to the present disclosure.
  • the method for adjusting the display style of the target device includes the following steps:
  • Step 301 Determine whether the face of the user of the target device is displayed in the target image.
  • the execution subject of the method for adjusting the display style of the target device may be the above-mentioned target device, or may be an application installed on the target device.
  • the above-mentioned execution subject may first determine whether the face of the user of the target device is displayed in the target image.
  • the target image can be any image.
  • The target image can be determined by designation or by filtering according to certain conditions. For example, the image currently captured by the camera of the target device can be used as the target image. As another example, an image that meets certain conditions (for example, the most recently captured image whose capture time is within three seconds of the current time) can be selected from an image library (for example, an "album") and used as the target image.
  • the target device can be any electronic device with a display screen.
  • the electronic device currently used by the user can be used as the target device.
  • As an example, face key point detection can be used to determine whether the face of the user of the target device is displayed in the image (a minimal face-detection sketch is given below).
  • It should be noted that the expression "the user of the target device" does not constitute any restriction on the user.
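As a minimal illustration of the face-detection check in step 301, the sketch below uses OpenCV's bundled Haar cascade detector to decide whether a face appears in the target image; the file path, helper name, and detector choice are illustrative assumptions rather than part of the disclosed method, which only requires some face or key-point detection algorithm.

```python
# Minimal sketch of step 301: decide whether a face appears in the target image.
# Uses OpenCV's bundled Haar cascade; any face/key-point detector could be used.
import cv2


def face_displayed(target_image_path: str) -> bool:
    """Return True if at least one face is detected in the target image."""
    image = cv2.imread(target_image_path)
    if image is None:
        return False  # the image could not be read
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0


if __name__ == "__main__":
    print(face_displayed("target.jpg"))  # "target.jpg" is a placeholder path
```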
  • Step 302 In response to determining that the user's face is displayed in the target image, determine the user's intention information based on at least one of the distance between the target device and the user and the target image.
  • the above-mentioned execution subject may determine the user's intention information based on at least one of the distance between the target device and the user and the target image.
  • the intention information is used to characterize the user's intention to adjust the display style of the target device.
  • The intention information includes, but is not limited to, information related to adjusting the font, font size, color, display position, and so on. For example, it can be "enlarge the font size", "lighten the color", and so on.
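To keep the later sketches concrete, the hypothetical container below groups the kinds of adjustments named above (font, font size, color, display position); the field names and example values are assumptions, not terminology from the disclosure.

```python
# Hypothetical representation of intention information, limited to the kinds of
# adjustment named in the text: font, font size, color, display position.
from dataclasses import dataclass
from typing import Optional


@dataclass
class IntentionInfo:
    font: Optional[str] = None              # e.g. a font family name
    font_scale: Optional[float] = None      # e.g. 2.0 for "enlarge the font size by 2 times"
    color: Optional[str] = None             # e.g. "pink", or a lightened color
    display_position: Optional[str] = None  # e.g. "top" or "center"


# Example corresponding to "enlarge the font size":
enlarge_intention = IntentionInfo(font_scale=2.0)
```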
  • the above-mentioned execution subject may determine the user's intention information based on the target image.
  • the target image may be input to a pre-trained user intention recognition model to obtain user intention information.
  • the user intention recognition model may be a trained artificial neural network.
  • As an example, a large number of training samples can be used to perform multiple iterations of training on a multi-layer convolutional neural network, an LSTM (Long Short-Term Memory) network, or the like, to obtain the user intention recognition model.
  • the training samples include sample images and corresponding sample intention information.
  • the sample intention information corresponding to the sample image can be obtained through manual labeling and other methods.
  • Specifically, the sample image in a training sample can be used as the input of the multi-layer convolutional neural network, and the sample intention information corresponding to the input sample image can be used as the expected output of the network; in this way, the user intention recognition model can be trained.
  • During training, the difference between the actual output and the sample intention information can be calculated using various loss functions, and the parameters of the multi-layer convolutional neural network can be adjusted by stochastic gradient descent until a preset condition is met, at which point training ends.
  • the trained multi-layer convolutional neural network can be used as the user intention recognition model.
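The sketch below illustrates this training scheme under stated assumptions: intention information is discretized into a small set of class labels, a small multi-layer convolutional network maps a sample image to those labels, a cross-entropy loss measures the difference between the actual output and the sample intention information, and stochastic gradient descent runs for a fixed number of epochs as the preset stopping condition. The network shape, label set, and dummy data are illustrative only; the disclosure equally allows an LSTM or other architectures and loss functions.

```python
# Illustrative training sketch for a user intention recognition model.
# Assumes intention information has been discretized into class labels,
# e.g. 0 = "enlarge font size", 1 = "lighten color", ...
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

NUM_INTENTIONS = 4  # assumed size of the intention label set


class IntentionNet(nn.Module):
    """Small multi-layer convolutional network: image -> intention logits."""

    def __init__(self, num_classes: int = NUM_INTENTIONS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # for 64x64 inputs

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))


def train(model, images, labels, epochs: int = 10):
    """SGD training loop; stops when the preset condition (epoch count) is met."""
    loader = DataLoader(TensorDataset(images, labels), batch_size=8, shuffle=True)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for batch_images, batch_labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(batch_images), batch_labels)  # actual vs. sample intention
            loss.backward()
            optimizer.step()
    return model


if __name__ == "__main__":
    # Dummy stand-ins for sample images (64x64 RGB) and sample intention labels.
    images = torch.randn(32, 3, 64, 64)
    labels = torch.randint(0, NUM_INTENTIONS, (32,))
    train(IntentionNet(), images, labels)
```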
  • the target image is input into a pre-trained user feature extraction model to obtain feature information corresponding to the face displayed in the target image.
  • the feature information is used to describe various features of the face displayed in the target image (for example, gender, age, occupation, etc.).
  • the user feature extraction model may be a trained artificial neural network.
  • A large number of training samples can be used to perform multiple iterations of training on a multi-layer convolutional neural network, an LSTM (Long Short-Term Memory) network, or the like, to obtain the user feature extraction model. The training samples include images and labeled feature information. On this basis, the user's intention information is determined based on the feature information.
  • The user's intention information can be determined from the feature information through preset processing logic or by querying a preset correspondence table, as sketched below. For example, if the obtained feature information is "female", the user's intention information can be determined as "adjust the color to pink" by querying the table.
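A minimal sketch of the correspondence-table variant follows; apart from the "female" to "adjust the color to pink" entry taken from the example above, the table contents are invented placeholders.

```python
# Illustrative lookup from extracted feature information to intention information.
# The table contents are examples only; a real system would populate it from
# product or user-research data.
FEATURE_TO_INTENTION = {
    "female": "adjust the color to pink",  # example given in the text
    "older adult": "enlarge font size",    # assumed additional entry
}


def intention_from_features(feature_info: list[str]) -> list[str]:
    """Map each recognized feature to an intention via the correspondence table."""
    return [FEATURE_TO_INTENTION[f] for f in feature_info if f in FEATURE_TO_INTENTION]


print(intention_from_features(["female"]))  # ['adjust the color to pink']
```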
  • the above-mentioned execution subject may determine the user's intention information based on the distance between the target device and the user.
  • The distance between the target device and the user can be obtained in a variety of ways: for example, it can be detected by a distance sensor provided in the target device, or it can be obtained by manual input. On this basis, as an example, the user's intention information can be obtained by querying a correspondence table in which a large number of distances and their corresponding optimal display styles are stored; that is, the optimal display style found for the measured distance is used as the user's intention information (see the sketch below).
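The distance-based variant can likewise be sketched as a table keyed by distance ranges; the thresholds and style values below are invented for illustration.

```python
# Illustrative distance-to-display-style correspondence table.
# Thresholds (in meters) and style values are invented.
DISTANCE_TABLE = [
    (0.3, {"font_scale": 1.0}),            # closer than 0.3 m: default size
    (0.6, {"font_scale": 1.5}),
    (float("inf"), {"font_scale": 2.0}),   # far away: largest font
]


def optimal_style_for_distance(distance_m: float) -> dict:
    """Return the optimal display style for a measured device-to-user distance."""
    for upper_bound, style in DISTANCE_TABLE:
        if distance_m <= upper_bound:
            return style
    return {}


print(optimal_style_for_distance(0.8))  # {'font_scale': 2.0}
```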
  • the above-mentioned execution subject may determine the user's intention information based on the target image and the distance between the target device and the user.
  • the user's intention information can be determined based on the target image and the distance between the target device and the user in a variety of ways.
  • As an example, first intention information and second intention information can be obtained based on the target image and on the distance between the target device and the user, respectively, and the user's intention information is then obtained by fusing the first intention information and the second intention information (one possible fusion is sketched below). Since the target image and the distance between the target device and the user are used in combination, the determined intention information is more accurate.
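The disclosure leaves the fusion method open, so the sketch below shows just one plausible rule: the distance-derived intention decides the font scale and the image-derived intention fills the remaining fields. The rule and the dict-based representation are assumptions.

```python
# Illustrative fusion of image-derived and distance-derived intention information,
# each represented as a dict of display-style fields. The merge rule (distance
# wins on font_scale, image wins elsewhere) is an assumption.
def fuse_intentions(first_intention: dict, second_intention: dict) -> dict:
    fused = dict(first_intention)           # start from the image-derived intention
    if "font_scale" in second_intention:    # let the distance-derived value decide size
        fused["font_scale"] = second_intention["font_scale"]
    for key, value in second_intention.items():
        fused.setdefault(key, value)        # fill any fields the image path left unset
    return fused


print(fuse_intentions({"color": "pink", "font_scale": 1.2}, {"font_scale": 2.0}))
```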
  • Step 303 Adjust the display style of the target device based on the intention information.
  • the above-mentioned execution subject may adjust the display style of the target device based on the intention information. For example, if the user's intention information is "adjust the color to pink", the above-mentioned execution subject may adjust the color to pink.
  • the display style may include but is not limited to at least one of the following: font, font size, color, display position, and so on.
  • the display font size of the target device may be adjusted based on the intent information.
  • the candidate display style may be determined based on the intent information. Then, it is determined whether the candidate display style satisfies a preset condition.
  • the preset conditions can be various conditions that limit the display style.
  • For example, the preset condition may be whether the target device supports the candidate display style, so as to avoid displaying garbled characters because the device does not support that style.
  • In response to determining that the candidate display style satisfies the preset condition, the display style of the target device is adjusted to the candidate display style (a minimal version of this check is sketched below).
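A compact sketch of this check follows, with a `supported_fonts` set standing in for whatever capability query the target device actually exposes.

```python
# Illustrative candidate-style check before applying an adjustment.
# `supported_fonts` stands in for a real device capability query.
def apply_if_supported(candidate_style: dict, supported_fonts: set[str],
                       apply_style) -> bool:
    """Apply the candidate display style only if the device supports it."""
    font = candidate_style.get("font")
    if font is not None and font not in supported_fonts:
        return False  # an unsupported font could render as garbled characters
    apply_style(candidate_style)
    return True


applied = apply_if_supported({"font": "Serif", "font_scale": 1.5},
                             {"Serif", "Sans"}, print)
```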
  • In some embodiments, the above method further includes: obtaining user feedback information after the display style is adjusted; adjusting the intention information based on the user feedback information to obtain revised intention information; and updating the user intention recognition model based on the target image and the revised intention information.
  • the user feedback information can be used to describe the user's feedback on the adjusted display style.
  • the user feedback information may be "satisfied", “unsatisfied”, “font size is too large”, “font size is too small”, “color is too bright” and so on.
  • the user feedback information after the adjustment of the display style can be obtained in a variety of ways.
  • user feedback information can be obtained by receiving text manually input by the user.
  • the voice input by the user can be received, and the voice can be analyzed to obtain user feedback information.
  • an image of the user's face can be photographed, and the facial image can be analyzed to obtain user feedback information.
  • the intention information is adjusted based on the user feedback information, and the revised intention information is obtained.
  • the specific adjustment method can be determined according to actual needs.
  • the user intention recognition model is updated based on the target image and the revised intention information. Specifically, the target image and the revised intention information can be used as a new training sample. The user intention recognition model is trained based on the new training samples.
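The update step can be sketched as a short fine-tuning pass on the single new sample, assuming the user intention recognition model is a classifier over discrete intention labels (as in the earlier training sketch); the learning rate and step count are arbitrary illustrative values.

```python
# Illustrative model update from one feedback-corrected sample: the target
# image plus the revised intention label become a new training example.
import torch
import torch.nn as nn


def update_from_feedback(model: nn.Module, target_image: torch.Tensor,
                         revised_intention_label: int, lr: float = 0.001,
                         steps: int = 1) -> nn.Module:
    """Fine-tune the user intention recognition model on the new sample."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    image_batch = target_image.unsqueeze(0)              # shape (1, C, H, W)
    label_batch = torch.tensor([revised_intention_label])
    model.train()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = loss_fn(model(image_batch), label_batch)
        loss.backward()
        optimizer.step()
    return model
```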
  • FIG. 4 shows a schematic diagram of another application scenario of the method for adjusting the display style of the target device according to some embodiments of the present disclosure.
  • the user's facial image 404 can be captured by a camera, and the facial image 404 can be image analyzed to obtain user feedback information 405.
  • the user feedback information 405 takes "the font size is too large" as an example.
  • the intention information 403 can be adjusted based on the user feedback information 405 to obtain the revised intention information 406.
  • For example, the original intention information 403 of "enlarge the font size by 2 times" can be adjusted to obtain the revised intention information 406, for example, "enlarge the font size by 1.5 times".
  • the target image 401 and the revised intention information 406 can be used as a new training sample 407.
  • the user intention recognition model 402 is trained based on the new training samples 407 to update the user intention recognition model 402. This makes the intention information obtained through the user intention recognition model 402 more accurate.
  • The method provided by some embodiments of the present disclosure realizes a more personalized and targeted style display by determining the user's intention information and adjusting the display style of the target device based on that information, so that the display style better matches the user. In this process, because the user's intention information is determined based on the distance from the user and/or the target image, the method can adapt to the requirements of different scenarios.
  • FIG. 5 shows a process 500 of other embodiments of the method for adjusting the display style of the target device.
  • the process 500 of the method for adjusting the display style of the target device includes the following steps:
  • Step 501 Determine whether the face of the user of the target device is displayed in the target image.
  • Step 502: In response to determining that the user's face is displayed in the target image, determine the user's intention information based on at least one of the distance between the target device and the user and the target image.
  • the intention information is used to characterize the user's intention to adjust the display style of the target device.
  • steps 501-502 and the technical effects brought by them can be referred to those embodiments corresponding to FIG. 3, which will not be repeated here.
  • Step 503 Adjusting the display style of the target device based on the intent information includes the following sub-steps:
  • Step 5031 Determine whether the target device is in a static state relative to the user, and whether the duration of the static state exceeds a preset threshold.
  • In some embodiments, the execution subject may determine whether the target device is in a static state relative to the user and whether the duration of the static state exceeds a preset threshold.
  • If so, step 5032 may continue to be executed.
  • Otherwise, the subsequent steps may be abandoned and the display style adjustment is not performed.
  • Step 5032: In response to determining that the target device is in a static state relative to the user and that the duration of the static state exceeds a preset threshold, adjust the display style of the target device based on the intention information.
  • In response to determining that the target device is in a static state relative to the user and that the duration of the static state exceeds a preset threshold, the display style of the target device is adjusted based on the intention information (see the sketch below). This ensures that the display style adjustment is triggered only when the target device is in a relatively stable state. In scenarios where the target device is unstable relative to the user (for example, the mobile phone suddenly falls or the user suddenly leaves), the adjustment is skipped, which avoids occupying processing resources and saves system overhead.
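The gate in steps 5031 and 5032 can be sketched as follows, assuming a motion reading (for example, from an accelerometer) as the stillness signal; the threshold, the required duration, and the polling interface are all illustrative assumptions.

```python
# Illustrative gate for steps 5031-5032: adjust only if the device has been
# static relative to the user for longer than a preset duration.
import time

STILL_THRESHOLD = 0.05        # assumed max acceptable motion reading, in g
REQUIRED_STILL_SECONDS = 2.0  # assumed preset duration threshold


def adjust_when_stable(read_motion, apply_adjustment, poll_interval=0.1):
    """Call apply_adjustment() once the motion reading stays below the threshold
    for REQUIRED_STILL_SECONDS; abandon the adjustment on the first large motion."""
    still_since = None
    while True:
        if abs(read_motion()) < STILL_THRESHOLD:
            still_since = still_since or time.monotonic()
            if time.monotonic() - still_since >= REQUIRED_STILL_SECONDS:
                apply_adjustment()
                return True
        else:
            return False  # device moved: abandon the adjustment
        time.sleep(poll_interval)
```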
  • It can be seen that the display style adjustment method for the target device in the embodiments corresponding to FIG. 5 adds, before the display style of the target device is adjusted, a step of determining the state of the target device relative to the user.
  • This ensures that the display style adjustment is triggered only when the target device is in a relatively stable state, which avoids occupying processing resources and saves system overhead.
  • As an implementation of the above method, the present disclosure provides some embodiments of a display style adjustment apparatus 600 for a target device. These apparatus embodiments correspond to the method embodiments shown in FIG. 3, and the apparatus can be applied to various electronic devices.
  • the apparatus 600 for adjusting the display style of the target device in some embodiments includes: a first determining unit 601, a second determining unit 602, and an adjusting unit 603.
  • the first determining unit 601 is configured to determine whether the face of the user of the target device is displayed in the target image.
  • the second determining unit 602 is configured to, in response to determining that the user's face is displayed in the target image, determine the user's intention information based on at least one of the distance between the target device and the user and the target image, where the intention information is used to characterize the user The intention to adjust the display style of the target device.
  • the adjustment unit 603 is configured to adjust the display style of the target device based on the intention information.
  • the intention information is used to characterize the user's intention to adjust the display font size of the target device; and the adjustment unit 603 is further configured to adjust the display font size of the target device based on the intention information.
  • the second determining unit 602 is further configured to input the target image into a pre-trained user intent recognition model to obtain user intent information.
  • The second determining unit 602 is further configured to: input the target image into a pre-trained user feature extraction model to obtain feature information corresponding to the face displayed in the target image; and adjust the display style of the target device based on the feature information.
  • The adjustment unit 603 is further configured to: determine a candidate display style based on the intention information; determine whether the candidate display style satisfies a preset condition; and, in response to determining that the candidate display style satisfies the preset condition, adjust the display style of the target device to the candidate display style.
  • The adjustment unit 603 is further configured to: determine whether the target device is in a static state relative to the user and whether the duration of the static state exceeds a preset threshold; and, in response to determining that the target device is in a static state relative to the user and that the duration of the static state exceeds the preset threshold, adjust the display style of the target device based on the intention information.
  • the apparatus 600 further includes: an acquisition unit (not shown in the figure), a correction unit (not shown in the figure), and an update unit (not shown in the figure).
  • The acquiring unit is configured to acquire user feedback information after the display style is adjusted.
  • The correcting unit is configured to adjust the intention information based on the user feedback information to obtain revised intention information.
  • The updating unit is configured to update the user intention recognition model based on the target image and the revised intention information.
  • FIG. 7 shows a schematic structural diagram of an electronic device 700 suitable for implementing some embodiments of the present disclosure.
  • The terminal devices in some embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g., car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 7 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present disclosure.
  • The electronic device 700 may include a processing device (such as a central processing unit, a graphics processor, etc.) 701, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage device 708 into a random access memory (RAM) 703.
  • In the RAM 703, various programs and data required for the operation of the electronic device 700 are also stored.
  • the processing device 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704.
  • An input/output (I/O) interface 705 is also connected to the bus 704.
  • The following devices can be connected to the I/O interface 705: an input device 706 such as a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, or gyroscope; an output device 707 such as a liquid crystal display (LCD), speaker, or vibrator; a storage device 708 such as a memory card; and a communication device 709.
  • the communication device 709 may allow the electronic device 700 to perform wireless or wired communication with other devices to exchange data.
  • Although FIG. 7 shows an electronic device 700 having various devices, it should be understood that it is not required to implement or have all of the illustrated devices; more or fewer devices may alternatively be implemented or provided. Each block shown in FIG. 7 may represent one device, or may represent multiple devices as needed.
  • the process described above with reference to the flowchart may be implemented as a computer software program.
  • some embodiments of the present disclosure include a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network through the communication device 709, or installed from the storage device 708, or installed from the ROM 702.
  • When the computer program is executed by the processing device 701, the above-mentioned functions defined in the methods of some embodiments of the present disclosure are executed.
  • the computer-readable medium described in some embodiments of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • the computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium.
  • the computer-readable signal medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device .
  • the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wire, optical cable, RF (Radio Frequency), etc., or any suitable combination of the above.
  • The client and the server can communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (for example, a communication network).
  • Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (for example, the Internet), and a peer-to-peer network (for example, an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or it may exist alone without being assembled into the electronic device.
  • The above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device: determines whether the face of the user of the target device is displayed in the target image; in response to determining that the user's face is displayed in the target image, determines the user's intention information based on at least one of the distance between the target device and the user and the target image, where the intention information is used to characterize the user's intention to adjust the display style of the target device; and adjusts the display style of the target device based on the intention information.
  • The computer program code used to perform the operations of some embodiments of the present disclosure can be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code can be executed entirely on the user's computer, partly on the user's computer, executed as an independent software package, partly on the user's computer and partly executed on a remote computer, or entirely executed on the remote computer or server.
  • The remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, connected through the Internet using an Internet service provider).
  • Each block in the flowchart or block diagram may represent a module, program segment, or part of code, and the module, program segment, or part of code contains one or more executable instructions for realizing the specified logical function.
  • the functions marked in the block may also occur in a different order from the order marked in the drawings. For example, two blocks shown in succession can actually be executed substantially in parallel, and they can sometimes be executed in the reverse order, depending on the functions involved.
  • Each block in the block diagram and/or flowchart, and combinations of blocks in the block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units described in some embodiments of the present disclosure may be implemented in software or hardware.
  • The described units may also be provided in a processor; for example, it may be described as: a processor including a first determining unit, a second determining unit, and an adjusting unit. The names of these units do not, under certain circumstances, constitute a limitation on the units themselves.
  • the adjustment unit can also be described as "a unit that adjusts the display style of the target device based on intent information.”
  • Exemplary types of hardware logic components include: Field Programmable Gate Arrays (FPGA), Application Specific Integrated Circuits (ASIC), Application Specific Standard Products (ASSP), Systems on Chip (SOC), Complex Programmable Logic Devices (CPLD), and so on.
  • In some embodiments, a display style adjustment method for a target device is provided, including: determining whether the face of the user of the target device is displayed in the target image; in response to determining that the user's face is displayed in the target image, determining the user's intention information based on at least one of the distance between the target device and the user and the target image, where the intention information is used to characterize the user's intention to adjust the display style of the target device; and adjusting the display style of the target device based on the intention information.
  • the intention information is used to characterize the user's intention to adjust the display font size of the target device; and adjusting the display style of the target device based on the intention information includes: adjusting the display font size of the target device based on the intention information.
  • Determining the user's intention information includes: inputting the target image into a pre-trained user intention recognition model to obtain the user's intention information.
  • Determining the user's intention information based on at least one of the distance between the target device and the user and the target image includes: inputting the target image into a pre-trained user feature extraction model to obtain feature information corresponding to the face displayed in the target image; and adjusting the display style of the target device based on the feature information.
  • Adjusting the display style of the target device based on the intention information includes: determining a candidate display style based on the intention information; determining whether the candidate display style satisfies a preset condition; and, in response to determining that the candidate display style satisfies the preset condition, adjusting the display style of the target device to the candidate display style.
  • Adjusting the display style of the target device based on the intention information includes: determining whether the target device is in a static state relative to the user and whether the duration of the static state exceeds a preset threshold; and, in response to determining that the target device is in a static state relative to the user and that the duration of the static state exceeds the preset threshold, adjusting the display style of the target device based on the intention information.
  • The method further includes: obtaining user feedback information after the display style is adjusted; adjusting the intention information based on the user feedback information to obtain revised intention information; and updating the user intention recognition model based on the target image and the revised intention information.
  • In some embodiments, a display style adjustment apparatus for a target device is provided, including: a first determining unit configured to determine whether the face of a user of the target device is displayed in the target image; a second determining unit configured to, in response to determining that the user's face is displayed in the target image, determine the user's intention information based on at least one of the distance between the target device and the user and the target image, where the intention information is used to characterize the user's intention to adjust the display style of the target device; and an adjustment unit configured to adjust the display style of the target device based on the intention information.
  • In some embodiments, an electronic device is provided, including: one or more processors; and a storage device on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement any of the above-mentioned methods.
  • In some embodiments, a computer-readable medium is provided, having a computer program stored thereon, wherein the program, when executed by a processor, implements any of the above-mentioned methods.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Disclosed are a display style adjustment method and apparatus for a target device, as well as an electronic device and a computer-readable medium. The method includes: determining whether the face of a user of a target device is displayed in a target image (301); in response to determining that the user's face is displayed in the target image, determining intention information of the user based on the distance between the target device and the user and/or the target image (302), where the intention information is used to characterize the user's intention to adjust the display style of the target device; and adjusting the display style of the target device based on the intention information (303). A more personalized and targeted style display is thereby achieved.
PCT/CN2020/126110 2019-11-06 2020-11-03 Procédé et appareil de réglage de style d'affichage pour dispositif cible WO2021088790A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911077857.2A CN110851032A (zh) 2019-11-06 2019-11-06 用于目标设备的显示样式调整方法和装置
CN201911077857.2 2019-11-06

Publications (1)

Publication Number Publication Date
WO2021088790A1 (fr)

Family

ID=69598691

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/126110 WO2021088790A1 (fr) 2019-11-06 2020-11-03 Procédé et appareil de réglage de style d'affichage pour dispositif cible

Country Status (2)

Country Link
CN (1) CN110851032A (fr)
WO (1) WO2021088790A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110851032A (zh) * 2019-11-06 2020-02-28 北京字节跳动网络技术有限公司 用于目标设备的显示样式调整方法和装置
CN112507385B (zh) * 2020-12-25 2022-05-10 北京字跳网络技术有限公司 信息显示方法、装置和电子设备
CN113138705A (zh) * 2021-03-16 2021-07-20 青岛海尔空调器有限总公司 用于调整显示界面展示方式的方法、装置及设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105426021A (zh) * 2015-12-21 2016-03-23 魅族科技(中国)有限公司 一种显示字符的方法以及终端
CN107491684A (zh) * 2017-09-22 2017-12-19 广东巽元科技有限公司 一种基于人脸识别的屏幕控制装置及其控制方法
CN109032345A (zh) * 2018-07-04 2018-12-18 百度在线网络技术(北京)有限公司 设备控制方法、装置、设备、服务端和存储介质
CN110851032A (zh) * 2019-11-06 2020-02-28 北京字节跳动网络技术有限公司 用于目标设备的显示样式调整方法和装置

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102045429B (zh) * 2009-10-13 2015-01-21 华为终端有限公司 一种调节显示内容的方法和设备
US20130002722A1 (en) * 2011-07-01 2013-01-03 Krimon Yuri I Adaptive text font and image adjustments in smart handheld devices for improved usability
CN103458115B (zh) * 2013-08-22 2016-04-13 Tcl通讯(宁波)有限公司 移动终端自动设置系统字体大小的处理方法及移动终端
CN106126017A (zh) * 2016-06-20 2016-11-16 北京小米移动软件有限公司 智能识别方法、装置和终端设备
CN106201261A (zh) * 2016-06-30 2016-12-07 捷开通讯(深圳)有限公司 一种移动终端及其显示画面调整方法
CN106529449A (zh) * 2016-11-03 2017-03-22 英华达(上海)科技有限公司 自动调整显示画面比例的方法及其显示装置
CN106778623A (zh) * 2016-12-19 2017-05-31 珠海格力电器股份有限公司 一种终端屏幕控制方法、装置及电子设备
CN107968890A (zh) * 2017-12-21 2018-04-27 广东欧珀移动通信有限公司 主题设置方法、装置、终端设备及存储介质

Also Published As

Publication number Publication date
CN110851032A (zh) 2020-02-28

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20884955

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20884955

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 05.09.2022)
