WO2021088790A1 - Display style adjustment method and apparatus for target device - Google Patents
- Publication number
- WO2021088790A1 (PCT/CN2020/126110)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- target device
- display style
- intention information
- intention
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Definitions
- the embodiments of the present disclosure relate to the field of computer technology, and in particular to a method, apparatus, electronic device, and computer-readable medium for adjusting a display style of a target device.
- this Summary is provided to introduce concepts in a brief form; these concepts will be described in detail in the following specific embodiments.
- this Summary is not intended to identify key features or essential features of the claimed technical solution, nor is it intended to limit the scope of the claimed technical solution.
- Some embodiments of the present disclosure propose a display style adjustment method, apparatus, electronic device, and computer-readable medium for a target device.
- some embodiments of the present disclosure provide a display style adjustment method for a target device.
- the method includes: determining whether the face of a user of the target device is displayed in a target image; and, in response to determining that the user's face is displayed in the target image, determining the user's intention information based on at least one of the distance between the target device and the user and the target image,
- where the intention information is used to characterize the user's intention to adjust the display style of the target device; and adjusting the display style of the target device based on the intention information.
- some embodiments of the present disclosure provide a display style adjustment apparatus for a target device, including: a first determining unit configured to determine whether the face of the user of the target device is displayed in the target image; a second determining unit configured to, in response to determining that the user's face is displayed in the target image, determine the user's intention information based on at least one of the distance between the target device and the user and the target image, where the intention information is used to characterize the user's intention to adjust the display style of the target device; and an adjustment unit configured to adjust the display style of the target device based on the intention information.
- some embodiments of the present disclosure provide an electronic device, including: one or more processors; and a storage device on which one or more programs are stored. When the one or more programs are executed by the one or more processors, the one or more processors implement the method described in any implementation of the first aspect.
- some embodiments of the present disclosure provide a computer-readable medium on which a computer program is stored, where the program, when executed by a processor, implements the method described in any implementation of the first aspect.
- One of the above-mentioned embodiments of the present disclosure has the following beneficial effect: by determining the user's intention information and adjusting the display style of the target device based on the intention information, a more personalized and targeted style display is realized, so that the display style better matches the user.
- In this process, by determining the user's intention information based on the distance from the user and/or the target image, the method can adapt to different scene requirements.
- Figures 1 and 2 are schematic diagrams of an application scenario of a method for adjusting a display style of a target device according to some embodiments of the present disclosure.
- FIG. 3 is a flowchart of some embodiments of a display style adjustment method for a target device according to the present disclosure.
- Fig. 4 is a schematic diagram of another application scenario of a method for adjusting a display style of a target device according to some embodiments of the present disclosure.
- FIG. 5 is a flowchart of other embodiments of a method for adjusting a display style of a target device according to the present disclosure.
- Fig. 6 is a schematic structural diagram of some embodiments of a display style adjustment apparatus for a target device according to the present disclosure.
- FIG. 7 is a schematic structural diagram of an electronic device suitable for implementing some embodiments of the present disclosure.
- Figures 1 and 2 are schematic diagrams of an application scenario of a method for adjusting a display style of a target device according to some embodiments of the present disclosure.
- the display style adjustment method for the target device in some embodiments of the present disclosure is generally executed by the terminal device.
- the terminal device can be hardware or software.
- the terminal device may be various electronic devices with a display screen and supporting display, including but not limited to smart phones, tablet computers, vehicle-mounted terminals, and so on.
- If the terminal device is software, it can be installed in the electronic devices listed above. It can be implemented, for example, as multiple pieces of software or software modules for providing distributed services, or as a single piece of software or software module. There is no specific limitation here.
- users can use terminal devices to interact with the server through the network to receive or send messages.
- the execution subject of the method for adjusting the display style of the target device may be the news application 102 installed on the smart phone 101.
- User A can browse news through the news application 102.
- the news application 102 can capture the facial image of the user A through the camera 103 on the smartphone 101.
- the news application 102 can use various face detection algorithms to determine whether the face of user A is displayed therein.
- the intention information 203 of the user A is determined.
- the user's intention information 203 is obtained by inputting the facial image 201 into the pre-trained user intention recognition model 202.
- the font size of the smartphone 101 can be enlarged based on the intention information 203, and the result is shown on the right side of FIG. 1.
- FIG. 3 there is shown a process 300 of some embodiments of a method for adjusting a display style of a target device according to the present disclosure.
- the method for adjusting the display style of the target device includes the following steps:
- Step 301 Determine whether the face of the user of the target device is displayed in the target image.
- the execution subject of the method for adjusting the display style of the target device may be the above-mentioned target device, or may be an application installed on the target device.
- the above-mentioned execution subject may first determine whether the face of the user of the target device is displayed in the target image.
- the target image can be any image.
- the target image can be determined by designation or by filtering through certain conditions.
- the image currently captured by the camera of the target device can be used as the target image.
- Alternatively, an image that meets certain conditions (for example, the most recently captured image, whose shooting time is no more than three seconds before the current time) can be selected from an image library (for example, an "album") as the target image.
- the target device can be any electronic device with a display screen.
- the electronic device currently used by the user can be used as the target device.
- a face key point detection method can be used to determine whether the face of the user of the target device is displayed in the image.
- It should be noted that the term "the user of the target device" does not constitute any restriction on the user.
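The key-point check described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: `detect_keypoints` is a hypothetical stand-in for a real landmark detector, and the five-point minimum is an assumed threshold.

```python
# Hypothetical sketch: a face counts as displayed only if a landmark
# detector finds enough facial key points in the target image.

MIN_KEYPOINTS = 5  # assumed minimum: two eyes, nose, two mouth corners

def detect_keypoints(image):
    # Placeholder: a real implementation would run a face landmark model.
    return image.get("keypoints", [])

def user_face_displayed(image):
    """Return True if enough facial key points are found in the image."""
    return len(detect_keypoints(image)) >= MIN_KEYPOINTS

# An image with five detected landmarks counts as showing a face.
frame = {"keypoints": [(30, 40), (60, 40), (45, 55), (35, 70), (55, 70)]}
print(user_face_displayed(frame))
print(user_face_displayed({"keypoints": []}))
```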
- Step 302 In response to determining that the user's face is displayed in the target image, determine the user's intention information based on at least one of the distance between the target device and the user and the target image.
- the above-mentioned execution subject may determine the user's intention information based on at least one of the distance between the target device and the user and the target image.
- the intention information is used to characterize the user's intention to adjust the display style of the target device.
- the intention information includes, but is not limited to, information related to adjusting the font, font size, color, display position, and so on. For example, it can be "enlarge the font size", "lighten the color", and so on.
- the above-mentioned execution subject may determine the user's intention information based on the target image.
- the target image may be input to a pre-trained user intention recognition model to obtain user intention information.
- the user intention recognition model may be a trained artificial neural network.
- a large number of training samples can be used to train a multi-layer convolutional neural network and an LSTM (Long Short-Term Memory) network over multiple iterations to obtain the user intention recognition model.
- the training samples include sample images and corresponding sample intention information.
- the sample intention information corresponding to the sample image can be obtained through manual labeling and other methods.
- the sample image in a training sample can be used as the input of the multi-layer convolutional neural network, and the sample intention information corresponding to the input sample image can be used as the expected output of the multi-layer convolutional neural network; in this way, the user intention recognition model can be trained.
- the difference between the actual output and the sample intention information can be calculated based on various loss functions, and the parameters of the multi-layer convolutional neural network can be adjusted by stochastic gradient descent until a preset condition is met, at which point training ends.
- the trained multi-layer convolutional neural network can be used as the user intention recognition model.
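The training procedure above can be illustrated in highly simplified form: a single linear model stands in for the multi-layer convolutional neural network, trained by stochastic gradient descent on (sample feature, expected intention score) pairs until a preset loss condition is met. All data and hyperparameters here are illustrative assumptions.

```python
# Toy stand-in for the disclosed training loop: loss between actual and
# expected output drives stochastic gradient descent until a preset
# condition is met, at which point training ends.

def train(samples, lr=0.1, loss_threshold=1e-4, max_iters=10_000):
    w, b = 0.0, 0.0
    for _ in range(max_iters):
        total_loss = 0.0
        for x, y in samples:       # x: image feature, y: expected intention
            pred = w * x + b       # actual output of the model
            err = pred - y         # difference from the expected output
            total_loss += err * err
            w -= lr * err * x      # stochastic gradient descent step
            b -= lr * err
        if total_loss < loss_threshold:  # preset condition met: stop
            break
    return w, b

# Assumed toy data: larger face in the image -> stronger "enlarge" intent.
w, b = train([(0.2, 0.1), (0.5, 0.4), (0.9, 0.8)])
```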
- the target image is input into a pre-trained user feature extraction model to obtain feature information corresponding to the face displayed in the target image.
- the feature information is used to describe various features of the face displayed in the target image (for example, gender, age, occupation, etc.).
- the user feature extraction model may be a trained artificial neural network.
- a large number of training samples can be used to train a multi-layer convolutional neural network and an LSTM (Long Short-Term Memory) network over multiple iterations to obtain the user feature extraction model.
- the training samples include images and labeled feature information. On this basis, the user's intention information is determined based on the feature information.
- the user's intention information can be determined based on the feature information through preset processing logic or by querying a correspondence table. For example, if the obtained feature information is "female", the user's intention information can be determined as "adjust the color to pink" by querying the correspondence table.
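The correspondence-table query described above amounts to a simple mapping from feature information to intention information. The table entries below are illustrative assumptions (only the "female" example appears in the disclosure), and the default value is invented for the sketch.

```python
# Sketch of the feature-to-intention correspondence table. Entries other
# than "female" are assumed for illustration.

INTENTION_TABLE = {
    "female": "adjust the color to pink",
    "elderly": "enlarge the font size",
    "child": "use a rounded font",
}

def intention_from_features(feature_info, default="keep current style"):
    """Map extracted feature information to intention information."""
    return INTENTION_TABLE.get(feature_info, default)

print(intention_from_features("female"))
```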
- the above-mentioned execution subject may determine the user's intention information based on the distance between the target device and the user.
- the distance between the target device and the user can be obtained in a variety of ways. For example, it can be detected by a distance sensor in the target device; alternatively, it can be obtained by manual input. On this basis, as an example, the intention information can be obtained by querying a correspondence table in which a large number of distances and the corresponding optimal display styles are stored. That is, the optimal display style can be determined as the user's intention information.
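The distance-based correspondence table can be sketched as a banded lookup: each distance range maps to an assumed optimal display style. The thresholds and style names below are illustrative, not values from the disclosure.

```python
import bisect

# Sketch of the distance-to-style correspondence table. Band boundaries
# and style names are assumptions for illustration.

THRESHOLDS_CM = [30, 60, 100]  # upper bounds of the distance bands
STYLES = ["small font", "medium font", "large font", "extra-large font"]

def style_for_distance(distance_cm):
    """Pick the optimal display style for the measured user distance."""
    return STYLES[bisect.bisect_right(THRESHOLDS_CM, distance_cm)]

print(style_for_distance(45))
print(style_for_distance(120))
```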
- the above-mentioned execution subject may determine the user's intention information based on the target image and the distance between the target device and the user.
- the user's intention information can be determined based on the target image and the distance between the target device and the user in a variety of ways.
- the first intention information and the second intention information may be obtained based on the target image and the distance between the target device and the user, respectively.
- the user's intention information is obtained by fusing the first intention information and the second intention information. Since the target image and the distance between the target device and the user are used in combination, the determined user's intention information is more accurate.
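One possible fusion of the two estimates, assuming both are expressed as numeric font-scale factors, is a weighted average; the disclosure does not specify the fusion method, so both the representation and the equal weighting are assumptions.

```python
# Assumed fusion of image-based (first) and distance-based (second)
# intention information, each represented as a font-scale factor.

def fuse_intentions(first_scale, second_scale, image_weight=0.5):
    """Fuse the two font-scale estimates by weighted average."""
    return image_weight * first_scale + (1 - image_weight) * second_scale

# Image model suggests 2.0x, distance lookup suggests 1.5x.
print(fuse_intentions(2.0, 1.5))
```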
- Step 303 Adjust the display style of the target device based on the intention information.
- the above-mentioned execution subject may adjust the display style of the target device based on the intention information. For example, if the user's intention information is "adjust the color to pink", the above-mentioned execution subject may adjust the color to pink.
- the display style may include but is not limited to at least one of the following: font, font size, color, display position, and so on.
- the display font size of the target device may be adjusted based on the intent information.
- the candidate display style may be determined based on the intent information. Then, it is determined whether the candidate display style satisfies a preset condition.
- the preset conditions can be various conditions that limit the display style.
- the preset condition may be: whether the target device supports the candidate display style. This avoids displaying garbled characters because the device does not support the style.
- in response to determining that the candidate display style satisfies the preset condition, the display style of the target device is adjusted to the candidate display style.
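The candidate-style check above can be sketched as follows; the set of supported styles is an illustrative assumption standing in for whatever the device actually supports.

```python
# Sketch of the preset-condition check: adopt the candidate style only if
# the device supports it, otherwise keep the current style (avoiding
# garbled output). The supported set is assumed for illustration.

SUPPORTED_STYLES = {"enlarge font", "shrink font", "pink color scheme"}

def apply_style(candidate, current="default"):
    """Adopt the candidate display style only if the device supports it."""
    if candidate in SUPPORTED_STYLES:  # preset condition
        return candidate               # adjust to the candidate style
    return current                     # otherwise leave the style unchanged

print(apply_style("pink color scheme"))
print(apply_style("blinking marquee"))
```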
- the above method further includes: obtaining user feedback information after the display style is adjusted; adjusting the intention information based on the user feedback information to obtain revised intention information; and updating the user intention recognition model based on the target image and the revised intention information.
- the user feedback information can be used to describe the user's feedback on the adjusted display style.
- the user feedback information may be "satisfied", "unsatisfied", "the font size is too large", "the font size is too small", "the color is too bright", and so on.
- the user feedback information after the adjustment of the display style can be obtained in a variety of ways.
- user feedback information can be obtained by receiving text manually input by the user.
- the voice input by the user can be received, and the voice can be analyzed to obtain user feedback information.
- an image of the user's face can be photographed, and the facial image can be analyzed to obtain user feedback information.
- the intention information is adjusted based on the user feedback information, and the revised intention information is obtained.
- the specific adjustment method can be determined according to actual needs.
- the user intention recognition model is updated based on the target image and the revised intention information. Specifically, the target image and the revised intention information can be used as a new training sample. The user intention recognition model is trained based on the new training samples.
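The feedback-revision step can be sketched as below, matching the Fig. 4 example of "enlarge by 2 times" being revised to "enlarge by 1.5 times" after "the font size is too large" feedback. The fixed step size and the way the revised pair is packaged as a training sample are assumptions.

```python
# Sketch of the feedback correction: nudge the intended font scale in the
# direction of the user's feedback, then form a new training sample from
# the target image and the revised intention. Step size is assumed.

def revise_intention(scale, feedback, step=0.5):
    """Shrink or grow the intended font scale per user feedback."""
    if feedback == "the font size is too large":
        return max(1.0, scale - step)
    if feedback == "the font size is too small":
        return scale + step
    return scale  # e.g. "satisfied": keep the intention as-is

revised = revise_intention(2.0, "the font size is too large")
new_sample = ("target_image_401", revised)  # used to update the model
print(revised)
```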
- FIG. 4 shows a schematic diagram of another application scenario of the method for adjusting the display style of the target device according to some embodiments of the present disclosure.
- the user's facial image 404 can be captured by a camera, and the facial image 404 can be image analyzed to obtain user feedback information 405.
- the user feedback information 405 takes "the font size is too large" as an example.
- the intention information 403 can be adjusted based on the user feedback information 405 to obtain the revised intention information 406.
- For example, the intention information 403, "enlarge the font size by 2 times", can be adjusted to obtain the revised intention information 406, for example, "enlarge the font size by 1.5 times".
- the target image 401 and the revised intention information 406 can be used as a new training sample 407.
- the user intention recognition model 402 is trained based on the new training samples 407 to update the user intention recognition model 402. This makes the intention information obtained through the user intention recognition model 402 more accurate.
- the method provided by some embodiments of the present disclosure realizes a more personalized and targeted style display by determining the user's intention information and adjusting the display style of the target device based on the intention information, so that the display style better matches the user. In this process, by determining the user's intention information based on the distance from the user and/or the target image, the method can adapt to different scene requirements.
- FIG. 5 shows a process 500 of other embodiments of the method for adjusting the display style of the target device.
- the process 500 of the method for adjusting the display style of the target device includes the following steps:
- Step 501 Determine whether the face of the user of the target device is displayed in the target image.
- Step 502 in response to determining that the user's face is displayed in the target image, determine the user's intention information based on at least one of the distance between the target device and the user and the target image.
- the intention information is used to characterize the user's intention to adjust the display style of the target device.
- steps 501-502 and the technical effects brought by them can be referred to those embodiments corresponding to FIG. 3, which will not be repeated here.
- Step 503 Adjusting the display style of the target device based on the intent information includes the following sub-steps:
- Step 5031 Determine whether the target device is in a static state relative to the user, and whether the duration of the static state exceeds a preset threshold.
- the execution subject may determine whether the target device is in a static state relative to the user.
- If so, step 5032 may continue to be executed; otherwise, the subsequent steps may be abandoned and the display style adjustment is not performed.
- Step 5032 in response to determining that the target device is in a static state relative to the user, and the duration of the static state exceeds a preset threshold, adjust the display style of the target device based on the intention information.
- In response to determining that the target device is in a static state relative to the user and the duration of the static state exceeds the preset threshold, the display style of the target device is adjusted based on the intention information. Thereby, it can be ensured that the display style adjustment is triggered only when the target device is in a relatively stable state. In scenes where the target device is unstable relative to the user (for example, the mobile phone suddenly falls, or the user suddenly leaves), skipping the display style adjustment avoids occupying processing resources and saves system overhead.
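The static-state check in step 5031 can be sketched as follows: the device is treated as static relative to the user if recent distance readings barely vary, and the adjustment fires only once that state has lasted long enough. The tolerance and duration threshold are illustrative assumptions.

```python
# Sketch of step 5031: a stability check on recent device-user distance
# readings, gated by a duration threshold. Numeric values are assumed.

def is_static(readings_cm, tolerance_cm=2.0):
    """True if the device-user distance is effectively unchanged."""
    return max(readings_cm) - min(readings_cm) <= tolerance_cm

def should_adjust(readings_cm, static_seconds, threshold_seconds=3.0):
    """Trigger the style adjustment only in a sustained static state."""
    return is_static(readings_cm) and static_seconds > threshold_seconds

print(should_adjust([40.0, 40.5, 41.0], static_seconds=5.0))
print(should_adjust([40.0, 70.0, 41.0], static_seconds=5.0))  # device moving
```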
- Compared with the embodiments corresponding to FIG. 3, the display style adjustment method in the embodiments corresponding to FIG. 5 adds, before adjusting the display style of the target device, a step of determining the target device's state relative to the user. This ensures that the display style adjustment is triggered only when the target device is in a relatively stable state, which avoids occupying processing resources and saves system overhead.
- the present disclosure provides some embodiments of a display style adjustment apparatus 600 for a target device. These apparatus embodiments correspond to the method embodiments shown in FIG. 3, and the apparatus can be specifically applied to various electronic devices.
- the apparatus 600 for adjusting the display style of the target device in some embodiments includes: a first determining unit 601, a second determining unit 602, and an adjusting unit 603.
- the first determining unit 601 is configured to determine whether the face of the user of the target device is displayed in the target image.
- the second determining unit 602 is configured to, in response to determining that the user's face is displayed in the target image, determine the user's intention information based on at least one of the distance between the target device and the user and the target image, where the intention information is used to characterize the user's intention to adjust the display style of the target device.
- the adjustment unit 603 is configured to adjust the display style of the target device based on the intention information.
- the intention information is used to characterize the user's intention to adjust the display font size of the target device; and the adjustment unit 603 is further configured to adjust the display font size of the target device based on the intention information.
- the second determining unit 602 is further configured to input the target image into a pre-trained user intent recognition model to obtain user intent information.
- the second determining unit 602 is further configured to: input the target image into a pre-trained user feature extraction model to obtain feature information corresponding to the face displayed in the target image; and determine the user's intention information based on the feature information.
- the adjustment unit 603 is further configured to: determine a candidate display style based on the intention information; determine whether the candidate display style satisfies a preset condition; and, in response to determining that the candidate display style satisfies the preset condition, adjust the display style of the target device to the candidate display style.
- the adjustment unit 603 is further configured to: determine whether the target device is in a static state relative to the user and whether the duration of the static state exceeds a preset threshold; and, in response to determining that the target device is in a static state relative to the user and the duration of the static state exceeds the preset threshold, adjust the display style of the target device based on the intention information.
- the apparatus 600 further includes: an acquisition unit (not shown in the figure), a correction unit (not shown in the figure), and an update unit (not shown in the figure).
- the acquiring unit is configured to: acquire user feedback information after the display style is adjusted;
- the correcting unit is configured to adjust the intent information based on the user feedback information to obtain the corrected intent information;
- the updating unit is configured to update the user intention recognition model based on the target image and the revised intention information.
- FIG. 7 shows a schematic structural diagram of an electronic device 700 suitable for implementing some embodiments of the present disclosure.
- the terminal devices in some embodiments of the present disclosure may include, but are not limited to, mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), vehicle-mounted terminals (e.g., car navigation terminals), and other mobile terminals, as well as fixed terminals such as digital TVs and desktop computers.
- the electronic device shown in FIG. 7 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present disclosure.
- the electronic device 700 may include a processing device (such as a central processing unit or a graphics processor) 701, which can execute various appropriate actions and processing according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage device 708 into a random access memory (RAM) 703.
- In the RAM 703, various programs and data required for the operation of the electronic device 700 are also stored.
- the processing device 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704.
- An input/output (I/O) interface 705 is also connected to the bus 704.
- the following devices can be connected to the I/O interface 705: an input device 706 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; an output device 707 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; a storage device 708 such as a memory card; and a communication device 709.
- the communication device 709 may allow the electronic device 700 to perform wireless or wired communication with other devices to exchange data.
- Although FIG. 7 shows an electronic device 700 having various devices, it should be understood that it is not required to implement or have all of the illustrated devices. More or fewer devices may alternatively be implemented or provided. Each block shown in FIG. 7 may represent one device, or may represent multiple devices as needed.
- the process described above with reference to the flowchart may be implemented as a computer software program.
- some embodiments of the present disclosure include a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart.
- the computer program may be downloaded and installed from the network through the communication device 709, or installed from the storage device 708, or installed from the ROM 702.
- When the computer program is executed by the processing device 701, the above-mentioned functions defined in the methods of some embodiments of the present disclosure are executed.
- the computer-readable medium described in some embodiments of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
- the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- the computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
- a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
- the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium.
- the computer-readable signal medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device.
- the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wire, optical cable, RF (Radio Frequency), etc., or any suitable combination of the above.
- the client and server can communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and can interconnect with digital data communication (e.g., a communication network) in any form or medium.
- Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (for example, the Internet), and peer-to-peer networks (for example, ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
- the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or it may exist alone without being assembled into the electronic device.
- the above-mentioned computer-readable medium carries one or more programs.
- When the one or more programs are executed by the electronic device, the electronic device: determines whether the face of the user of the target device is displayed in the target image; in response to determining that the user's face is displayed in the target image, determines the user's intention information based on at least one of the distance between the target device and the user and the target image,
- where the intention information is used to characterize the user's intention to adjust the display style of the target device; and adjusts the display style of the target device based on the intention information.
- the computer program code used to perform the operations of some embodiments of the present disclosure can be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
- the program code can be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
- each block in the flowchart or block diagrams may represent a module, program segment, or portion of code, which contains one or more executable instructions for realizing the specified logical function.
- in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved.
- each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
- the units described in some embodiments of the present disclosure may be implemented in software or hardware.
- the described units may also be provided in a processor, which may, for example, be described as: a processor including a first determining unit, a second determining unit, and an adjusting unit. The names of these units do not, in some cases, constitute a limitation on the units themselves.
- for example, the adjusting unit can also be described as "a unit that adjusts the display style of the target device based on the intention information."
- exemplary types of hardware logic components that can be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and so on.
- a display style adjustment method for a target device includes: determining whether the face of a user of the target device is displayed in a target image; in response to determining that the user's face is displayed in the target image, determining the user's intention information based on at least one of the distance between the target device and the user and the target image, where the intention information is used to characterize the user's intention to adjust the display style of the target device; and adjusting the display style of the target device based on the intention information.
- the intention information is used to characterize the user's intention to adjust the display font size of the target device; and adjusting the display style of the target device based on the intention information includes: adjusting the display font size of the target device based on the intention information.
- determining the user's intention information includes: inputting the target image into a pre-trained user intention recognition model to obtain the user's intention information.
- determining the user's intention information based on at least one of the distance between the target device and the user and the target image includes: inputting the target image into a pre-trained user feature extraction model to obtain feature information corresponding to the face displayed in the target image; and determining the user's intention information based on the feature information.
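A minimal sketch of this two-stage variant follows, with trivial stand-ins for the pre-trained models; the disclosure does not specify model architectures, and the "squinting" feature used here is a hypothetical example of facial feature information.

```python
# Illustrative two-stage pipeline: feature extraction, then intention mapping.
# Both "models" are trivial stand-ins; the patent does not specify them.

def feature_extraction_model(target_image):
    """Stand-in for the pre-trained user feature extraction model:
    returns feature information for the face shown in the image."""
    return {"squinting": bool(target_image.get("squinting"))}

def intention_from_features(features):
    """Map facial feature information to intention information.
    Squinting is read here as an intent to enlarge the displayed text."""
    if features["squinting"]:
        return {"adjust": "font_size", "direction": "increase"}
    return {"adjust": "font_size", "direction": "keep"}

def determine_intention(target_image):
    # Stage 1: extract features; Stage 2: determine intention from them.
    features = feature_extraction_model(target_image)
    return intention_from_features(features)
```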
- adjusting the display style of the target device based on the intention information includes: determining a candidate display style based on the intention information; determining whether the candidate display style satisfies a preset condition; and, in response to determining that the candidate display style satisfies the preset condition, adjusting the display style of the target device to the candidate display style.
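The candidate-plus-condition step can be sketched as below. The font-size bounds are an assumed example of a "preset condition"; the disclosure does not name a specific condition.

```python
# Hedged sketch: a candidate display style gated by a preset condition.
# The legibility bounds below are illustrative assumptions.

MIN_FONT, MAX_FONT = 8, 32  # assumed preset condition: legible size range

def adjust_via_candidate(current_style, intention):
    # Determine a candidate display style from the intention information.
    delta = {"increase": 2, "decrease": -2}.get(intention.get("direction"), 0)
    candidate = {**current_style, "font_size": current_style["font_size"] + delta}
    # Adopt the candidate only if it satisfies the preset condition.
    if MIN_FONT <= candidate["font_size"] <= MAX_FONT:
        return candidate
    return current_style
```

Rejecting an out-of-range candidate leaves the current style in place, which matches the conditional "in response to determining that the candidate display style satisfies the preset condition" phrasing.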
- adjusting the display style of the target device based on the intention information includes: determining whether the target device is in a static state relative to the user, and whether the duration of the static state exceeds a preset threshold; and, in response to determining that the target device is in a static state relative to the user and that the duration of the static state exceeds the preset threshold, adjusting the display style of the target device based on the intention information.
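One way to realize this gate is to check that the most recent relative-motion samples stay below a small epsilon for longer than the threshold. The sampling period, epsilon, and threshold here are assumed values, not figures from the disclosure.

```python
# Illustrative gate: adjust only after the device has been static relative
# to the user for longer than a preset threshold. All constants are assumptions.

def static_long_enough(relative_motion, sample_period_s=0.1,
                       motion_epsilon=0.01, threshold_s=1.0):
    """relative_motion: oldest-first magnitudes of device motion relative
    to the user (e.g., combining IMU data with face tracking)."""
    run = 0
    for m in relative_motion:
        # Length of the trailing run of "static" samples.
        run = run + 1 if m < motion_epsilon else 0
    return run * sample_period_s > threshold_s
```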
- the method further includes: obtaining user feedback information after the display style is adjusted; adjusting the intention information based on the user feedback information to obtain revised intention information; and updating the user intention recognition model based on the target image and the revised intention information.
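The feedback loop can be sketched as follows. The feedback values and the revision rule are hypothetical, and actual model retraining is stubbed out as collecting (target image, revised intention) pairs.

```python
# Sketch of the feedback loop: revise the recorded intention from user
# feedback and collect pairs for later retraining of the intention model.
# Feedback labels and the flip rule are illustrative assumptions.

def revise_intention(intention, feedback):
    """If the user undid the adjustment, flip the recorded direction."""
    if feedback == "undone":
        flipped = {"increase": "keep", "keep": "increase"}
        return {**intention, "direction": flipped[intention["direction"]]}
    return intention

training_pairs = []  # stand-in for the model's fine-tuning dataset

def record_for_update(target_image, intention, feedback):
    revised = revise_intention(intention, feedback)
    # A real system would periodically update the user intention
    # recognition model on the accumulated (image, revised intention) pairs.
    training_pairs.append((target_image, revised))
    return revised
```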
- a display style adjustment apparatus for a target device includes: a first determining unit configured to determine whether the face of a user of the target device is displayed in a target image; a second determining unit configured to, in response to determining that the user's face is displayed in the target image, determine the user's intention information based on at least one of the distance between the target device and the user and the target image, where the intention information is used to characterize the user's intention to adjust the display style of the target device; and an adjusting unit configured to adjust the display style of the target device based on the intention information.
- an electronic device includes: one or more processors; and a storage device on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement any of the above-mentioned methods.
- a computer-readable medium having a computer program stored thereon, where the program, when executed by a processor, implements any of the above-mentioned methods.
Claims (10)
- 1. A display style adjustment method for a target device, comprising: determining whether the face of a user of the target device is displayed in a target image; in response to determining that the user's face is displayed in the target image, determining the user's intention information based on at least one of the distance between the target device and the user and the target image, where the intention information is used to characterize the user's intention to adjust the display style of the target device; and adjusting the display style of the target device based on the intention information.
- 2. The method according to claim 1, wherein the intention information is used to characterize the user's intention to adjust the display font size of the target device; and adjusting the display style of the target device based on the intention information comprises: adjusting the display font size of the target device based on the intention information.
- 3. The method according to claim 1, wherein determining the user's intention information based on at least one of the distance between the target device and the user and the target image comprises: inputting the target image into a pre-trained user intention recognition model to obtain the user's intention information.
- 4. The method according to claim 1, wherein determining the user's intention information based on at least one of the distance between the target device and the user and the target image comprises: inputting the target image into a pre-trained user feature extraction model to obtain feature information corresponding to the face displayed in the target image; and determining the user's intention information based on the feature information.
- 5. The method according to claim 1, wherein adjusting the display style of the target device based on the intention information comprises: determining a candidate display style based on the intention information; determining whether the candidate display style satisfies a preset condition; and, in response to determining that the candidate display style satisfies the preset condition, adjusting the display style of the target device to the candidate display style.
- 6. The method according to claim 1, wherein adjusting the display style of the target device based on the intention information comprises: determining whether the target device is in a static state relative to the user, and whether the duration of the static state exceeds a preset threshold; and, in response to determining that the target device is in a static state relative to the user and that the duration of the static state exceeds the preset threshold, adjusting the display style of the target device based on the intention information.
- 7. The method according to claim 3, further comprising: obtaining user feedback information after the display style is adjusted; adjusting the intention information based on the user feedback information to obtain revised intention information; and updating the user intention recognition model based on the target image and the revised intention information.
- 8. A display style adjustment apparatus for a target device, comprising: a first determining unit configured to determine whether the face of a user of the target device is displayed in a target image; a second determining unit configured to, in response to determining that the user's face is displayed in the target image, determine the user's intention information based on at least one of the distance between the target device and the user and the target image, where the intention information is used to characterize the user's intention to adjust the display style of the target device; and an adjusting unit configured to adjust the display style of the target device based on the intention information.
- 9. An electronic device, comprising: one or more processors; and a storage device on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-7.
- 10. A computer-readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method according to any one of claims 1-7.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911077857.2A CN110851032A (en) | 2019-11-06 | 2019-11-06 | Display style adjustment method and device for target device |
CN201911077857.2 | 2019-11-06 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021088790A1 true WO2021088790A1 (en) | 2021-05-14 |
Family
ID=69598691
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/126110 WO2021088790A1 (en) | 2019-11-06 | 2020-11-03 | Display style adjustment method and apparatus for target device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110851032A (en) |
WO (1) | WO2021088790A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110851032A (en) * | 2019-11-06 | 2020-02-28 | 北京字节跳动网络技术有限公司 | Display style adjustment method and device for target device |
CN112507385B (en) * | 2020-12-25 | 2022-05-10 | 北京字跳网络技术有限公司 | Information display method and device and electronic equipment |
CN113138705A (en) * | 2021-03-16 | 2021-07-20 | 青岛海尔空调器有限总公司 | Method, device and equipment for adjusting display mode of display interface |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105426021A (en) * | 2015-12-21 | 2016-03-23 | 魅族科技(中国)有限公司 | Method for displaying character and terminal |
CN107491684A (en) * | 2017-09-22 | 2017-12-19 | 广东巽元科技有限公司 | A kind of screen control device and its control method based on recognition of face |
CN109032345A (en) * | 2018-07-04 | 2018-12-18 | 百度在线网络技术(北京)有限公司 | Apparatus control method, device, equipment, server-side and storage medium |
CN110851032A (en) * | 2019-11-06 | 2020-02-28 | 北京字节跳动网络技术有限公司 | Display style adjustment method and device for target device |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102045429B (en) * | 2009-10-13 | 2015-01-21 | 华为终端有限公司 | Method and equipment for adjusting displayed content |
US20130002722A1 (en) * | 2011-07-01 | 2013-01-03 | Krimon Yuri I | Adaptive text font and image adjustments in smart handheld devices for improved usability |
CN103458115B (en) * | 2013-08-22 | 2016-04-13 | Tcl通讯(宁波)有限公司 | The processing method of mobile terminal automatic capture system font size and mobile terminal |
CN106126017A (en) * | 2016-06-20 | 2016-11-16 | 北京小米移动软件有限公司 | Intelligent identification Method, device and terminal unit |
CN106201261A (en) * | 2016-06-30 | 2016-12-07 | 捷开通讯(深圳)有限公司 | A kind of mobile terminal and display picture adjusting method thereof |
CN106529449A (en) * | 2016-11-03 | 2017-03-22 | 英华达(上海)科技有限公司 | Method for automatically adjusting the proportion of displayed image and its display apparatus |
CN106778623A (en) * | 2016-12-19 | 2017-05-31 | 珠海格力电器股份有限公司 | A kind of terminal screen control method, device and electronic equipment |
CN107968890A (en) * | 2017-12-21 | 2018-04-27 | 广东欧珀移动通信有限公司 | theme setting method, device, terminal device and storage medium |
- 2019-11-06 CN CN201911077857.2A patent/CN110851032A/en active Pending
- 2020-11-03 WO PCT/CN2020/126110 patent/WO2021088790A1/en active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105426021A (en) * | 2015-12-21 | 2016-03-23 | 魅族科技(中国)有限公司 | Method for displaying character and terminal |
CN107491684A (en) * | 2017-09-22 | 2017-12-19 | 广东巽元科技有限公司 | A kind of screen control device and its control method based on recognition of face |
CN109032345A (en) * | 2018-07-04 | 2018-12-18 | 百度在线网络技术(北京)有限公司 | Apparatus control method, device, equipment, server-side and storage medium |
CN110851032A (en) * | 2019-11-06 | 2020-02-28 | 北京字节跳动网络技术有限公司 | Display style adjustment method and device for target device |
Also Published As
Publication number | Publication date |
---|---|
CN110851032A (en) | 2020-02-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11455830B2 (en) | Face recognition method and apparatus, electronic device, and storage medium | |
CN110162670B (en) | Method and device for generating expression package | |
WO2021088790A1 (en) | Display style adjustment method and apparatus for target device | |
CN109993150B (en) | Method and device for identifying age | |
CN111767371B (en) | Intelligent question-answering method, device, equipment and medium | |
CN112153460B (en) | Video dubbing method and device, electronic equipment and storage medium | |
WO2021114979A1 (en) | Video page display method and apparatus, electronic device and computer-readable medium | |
CN111459364B (en) | Icon updating method and device and electronic equipment | |
CN111897950A (en) | Method and apparatus for generating information | |
CN110008926B (en) | Method and device for identifying age | |
CN110046571B (en) | Method and device for identifying age | |
CN111126159A (en) | Method, apparatus, electronic device, and medium for tracking pedestrian in real time | |
CN111461965B (en) | Picture processing method and device, electronic equipment and computer readable medium | |
WO2020221115A1 (en) | Method and device for displaying information | |
CN112990176A (en) | Writing quality evaluation method and device and electronic equipment | |
CN113628097A (en) | Image special effect configuration method, image recognition method, image special effect configuration device and electronic equipment | |
CN112309389A (en) | Information interaction method and device | |
CN110765304A (en) | Image processing method, image processing device, electronic equipment and computer readable medium | |
CN114550728B (en) | Method, device and electronic equipment for marking speaker | |
CN113033552B (en) | Text recognition method and device and electronic equipment | |
CN111062995B (en) | Method, apparatus, electronic device and computer readable medium for generating face image | |
CN113409208A (en) | Image processing method, device, equipment and storage medium | |
CN112309387A (en) | Method and apparatus for processing information | |
CN112418233A (en) | Image processing method, image processing device, readable medium and electronic equipment | |
CN111652432A (en) | Method and device for determining user attribute information, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20884955 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20884955 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 05.09.2022) |
|