WO2021102850A1 - User interface system, electronic device and interaction method for picture recognition - Google Patents
User interface system, electronic device and interaction method for picture recognition
- Publication number: WO2021102850A1 (application PCT/CN2019/121748)
- Authority: WIPO (PCT)
- Prior art keywords: interface, picture, area, function, display screen
Classifications
- G06F16/5866 — Information retrieval of still image data; retrieval characterised by using metadata generated manually, e.g. tags, keywords, comments, manually generated location and time information
- G06F16/55 — Information retrieval of still image data; clustering; classification
- G06F3/0482 — Interaction techniques based on graphical user interfaces [GUI]; interaction with lists of selectable items, e.g. menus
- G06F3/04842 — Selection of displayed objects or displayed text elements
- G06F3/04847 — Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
- G06V10/20 — Arrangements for image or video recognition or understanding; image preprocessing
Definitions
- the present disclosure relates to the technical field of terminals, in particular to a user interface system, electronic equipment and interaction method for picture recognition.
- the embodiments of the present disclosure provide an interactive method for image recognition through a user interface, which includes:
- each of the plurality of first function controls presents a first picture corresponding to that control, and the presentation state of the object in the first picture is different in each first function control;
- the picture in the picture display control is updated to the first picture or the second picture, and the attribute information of the updated picture is presented on the user interface;
- after the attribute information of the updated picture is presented and a selection instruction for the third function control is received, the attribute information of the updated picture is stored in a first data list.
- the attribute information of the updated picture includes at least one of text information, letter information, and numeric information.
- storing the attribute information of the updated picture in the first data list, after the attribute information is presented and a selection instruction for the third function control is received, includes:
- after the selected attribute information is received and a selection instruction for the third function control is received, storing the selected attribute information in the first data list.
- the presentation state includes: at least one of an oblique state of the object and a distortion state of the object.
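The claimed interaction can be sketched as a small state model. The Python sketch below is purely illustrative — the class, the `recognize` stub, and the picture names are assumptions for the sake of the example, not part of the disclosure:

```python
# Illustrative sketch of the claimed interaction flow (all names hypothetical).

def recognize(picture):
    """Stand-in for the recognition backend; returns attribute info
    (text/letter/numeric information) for a picture."""
    return {"category": "painting", "subject": picture}

class PictureRecognitionUI:
    def __init__(self, first_pictures):
        # Each first function control presents its own first picture,
        # each showing the object in a different presentation state.
        self.first_controls = list(first_pictures)
        self.displayed_picture = None
        self.attribute_info = None
        self.first_data_list = []  # filled via the third function control

    def select_first_control(self, index):
        # Selecting a first function control updates the picture display
        # control and presents the picture's attribute information.
        self.displayed_picture = self.first_controls[index]
        self.attribute_info = recognize(self.displayed_picture)
        return self.attribute_info

    def upload_second_picture(self, picture):
        # The second function control lets the user upload a second picture.
        self.displayed_picture = picture
        self.attribute_info = recognize(picture)
        return self.attribute_info

    def select_third_control(self):
        # The third function control stores the presented attribute
        # information into the first data list.
        if self.attribute_info is not None:
            self.first_data_list.append(self.attribute_info)

ui = PictureRecognitionUI(["tilted.jpg", "distorted.jpg"])
ui.select_first_control(0)
ui.select_third_control()
```

The sketch only captures the ordering constraint in the claim: nothing is stored until attribute information has been presented and the third control is selected.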
- an embodiment of the present disclosure also provides a user interface system for picture recognition, which includes a plurality of first function controls, at least one second function control, at least one third function control, and a picture display control displayed in the user interface.
- each of the plurality of first function controls presents in the user interface a first picture corresponding to that control, and the presentation state of the object in the first picture is different in each first function control;
- the second function control is operable after being selected, so that the user can select and upload a second picture;
- one of the plurality of first function controls is operable after being selected, or after the user selects and uploads a second picture, so that the picture in the picture display control is updated to the first picture or the second picture while the attribute information of the updated picture is presented on the user interface;
- an embodiment of the present disclosure also provides an electronic device, including a display screen, a memory, and a processor, wherein:
- the memory, connected to the display screen and the processor, is configured to store computer instructions and to save data associated with the display screen;
- the processor, connected to the display screen and the memory, is configured to execute the computer instructions to cause the electronic device to execute:
- a first interface is displayed on the display screen, where the first interface includes a first area and a second area, the first area includes at least one primary function classification label, and the second area includes a plurality of secondary function classification labels and at least one tertiary function classification label included in each secondary function classification label;
- in response to a trigger instruction selecting one of the at least one primary function classification label, the display screen displays in the second area the plurality of secondary function classification labels included in the selected primary function classification label and the at least one tertiary function classification label included in each secondary function classification label;
- in response to a trigger instruction selecting one of the at least one tertiary function classification label, the display screen displays a second interface; the content included in the second interface is different from the content included in the first interface.
- the first interface further includes a third area, and the third area, the first area, and the second area are arranged from top to bottom of the first interface.
- when the display screen displays the first interface, in response to a directional-movement trigger instruction along the first direction in the second area, the corresponding primary function classification label is selected.
- the entire first interface moves in the second direction relative to the display screen; when the first area of the first interface moves to the top of the first interface, the position of the first area remains fixed, and the second area continues to move in the second direction.
- when the display screen displays the second interface corresponding to the selected tertiary function classification label, the second interface includes a first display area, a second display area, a third display area, and a fourth display area;
- the first display area includes an experience object picture or an experience object effect picture of the tertiary function;
- the second display area includes a plurality of example pictures distributed along a third direction, each of which is smaller than the picture in the first display area;
- the first display area displays the effect picture and the complete picture corresponding to the selected example picture, and the effect picture corresponds to the function of the selected tertiary function classification label;
- in response to a directional-movement trigger instruction along the third direction, the multiple example pictures move simultaneously along the third direction in the second display area;
- the third display area includes a first operation label for uploading a picture
- the fourth display area includes an application scenario label associated with the selected three-level function classification label.
- the first display area, the second display area, the third display area, and the fourth display area are arranged along a fourth direction on the second interface;
- the third direction is the horizontal direction of the display screen, and the fourth direction is the vertical direction of the display screen.
- when the display screen displays the second interface corresponding to the selected tertiary function classification label, the second interface further includes a fifth display area;
- the fifth display area displays the attribute information of the complete image.
- the third display area further includes a second operation label; in response to selection of the second operation label, the attribute information is saved in a data list.
- when the display screen displays the second interface corresponding to the selected tertiary function classification label, the second interface includes a first display area, a second display area, and a third display area;
- in response to input information in the second display area, the first display area of the second interface of the display screen generates a first conversion image corresponding to the input information;
- the third display area includes a third operation label and a fourth operation label, where the third operation label is used to convert the first conversion image into a second conversion image, the fourth operation label is used to convert the first conversion image into a third conversion image, and the second conversion image and the third conversion image are different.
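The conversion-image variant above maps input text to a first conversion image, with two operation labels producing two distinct further conversions. A minimal sketch — the disclosure does not specify the conversion algorithms, so the functions below are placeholders:

```python
# Hypothetical sketch of the conversion-image second interface.

def generate_first_conversion(text):
    # B1 generates a first conversion image from the input text in B2.
    return f"image<{text}>"

def to_second_conversion(first_image):
    # Third operation label: first -> second conversion image.
    return f"styleA({first_image})"

def to_third_conversion(first_image):
    # Fourth operation label: first -> third conversion image,
    # which must differ from the second conversion image.
    return f"styleB({first_image})"

first = generate_first_conversion("hello")
assert to_second_conversion(first) != to_third_conversion(first)
```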
- an embodiment of the present disclosure also provides an electronic device interaction method, wherein the electronic device includes a display screen, and the method includes:
- the first interface includes a first area and a second area
- the first area includes at least one primary function classification label
- the second area includes a plurality of secondary function classification labels and at least one tertiary function classification label included in each secondary function classification label;
- in response to a trigger instruction selecting one of the at least one primary function classification label, controlling the display screen to display in the second area the plurality of secondary function classification labels included in the selected primary function classification label and the at least one tertiary function classification label included in each secondary function classification label;
- in response to a trigger instruction selecting one of the at least one tertiary function classification label, controlling the display screen to display a second interface; the content included in the second interface is different from the content included in the first interface.
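The two trigger responses of the interaction method amount to a simple dispatch: a primary-label selection repopulates the second area, and a tertiary-label selection switches interfaces. The following sketch is an illustrative model only — the catalog contents and class names are assumptions:

```python
# Hypothetical model of the first-interface trigger handling.

CATALOG = {
    "primary-1": {"secondary-11": ["tertiary-111", "tertiary-112"]},
    "primary-2": {"secondary-21": ["tertiary-211"]},
}

class DisplayScreen:
    def __init__(self):
        self.current_interface = "first"
        self.second_area = {}

    def on_primary_selected(self, primary):
        # Display the selected primary label's secondary labels, and the
        # tertiary labels under each of them, in the second area.
        self.second_area = CATALOG[primary]

    def on_tertiary_selected(self, tertiary):
        # Switch to the second interface, whose content differs from
        # the first interface's.
        self.current_interface = ("second", tertiary)

screen = DisplayScreen()
screen.on_primary_selected("primary-1")
screen.on_tertiary_selected("tertiary-112")
```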
- FIG. 1 is a schematic structural diagram of an electronic device provided by an embodiment of the disclosure;
- FIG. 2 is the first schematic diagram of an interface displayed by the electronic device provided by an embodiment of the disclosure;
- FIG. 3 is the second schematic diagram of an interface displayed by the electronic device provided by an embodiment of the disclosure;
- FIG. 4 is the third schematic diagram of an interface displayed by the electronic device provided by an embodiment of the disclosure;
- FIG. 5 is the fourth schematic diagram of an interface displayed by the electronic device provided by an embodiment of the disclosure;
- FIG. 6a is the first schematic diagram of an interface change during display of the electronic device provided by an embodiment of the disclosure;
- FIG. 6b is the second schematic diagram of an interface change during display of the electronic device provided by an embodiment of the disclosure;
- FIG. 7 is the fifth schematic diagram of an interface displayed by the electronic device provided by an embodiment of the disclosure;
- FIG. 8 is the sixth schematic diagram of an interface displayed by the electronic device provided by an embodiment of the disclosure;
- FIG. 9 is the seventh schematic diagram of an interface displayed by the electronic device provided by an embodiment of the disclosure;
- FIG. 10 is the eighth schematic diagram of an interface displayed by the electronic device provided by an embodiment of the disclosure;
- FIG. 11 is the ninth schematic diagram of an interface displayed by the electronic device provided by an embodiment of the disclosure;
- FIG. 12 is the tenth schematic diagram of an interface displayed by the electronic device provided by an embodiment of the disclosure;
- FIG. 13 is the eleventh schematic diagram of an interface displayed by the electronic device provided by an embodiment of the disclosure;
- FIG. 14 is the twelfth schematic diagram of an interface displayed by the electronic device provided by an embodiment of the disclosure;
- FIG. 15 is the thirteenth schematic diagram of an interface displayed by the electronic device provided by an embodiment of the disclosure;
- FIG. 16 is the fourteenth schematic diagram of an interface displayed by the electronic device provided by an embodiment of the disclosure;
- FIG. 17 is the fifteenth schematic diagram of an interface displayed by the electronic device provided by an embodiment of the disclosure;
- FIG. 18 is the sixteenth schematic diagram of an interface displayed by the electronic device provided by an embodiment of the disclosure;
- FIG. 19 is the seventeenth schematic diagram of an interface displayed by the electronic device provided by an embodiment of the disclosure;
- FIG. 20 is the eighteenth schematic diagram of an interface displayed by the electronic device provided by an embodiment of the disclosure;
- FIG. 21 is a schematic flowchart of an interface display method provided by an embodiment of the disclosure;
- FIG. 22 is a schematic diagram of a user interface system for picture recognition provided by an embodiment of the disclosure;
- FIG. 23 is a flowchart of an interactive method for picture recognition through a user interface provided by an embodiment of the disclosure.
- An electronic device provided by an embodiment of the present disclosure includes a display screen 1, a memory 2 and a processor 3, wherein:
- the memory 2 connected to the display screen 1 and the processor 3, is configured to store computer instructions and save data associated with the display screen 1;
- the processor 3 connected to the display screen 1 and the memory 2, is configured to execute computer instructions to make the electronic device execute:
- a first interface 10 is displayed on the display screen.
- the first interface 10 includes a first area A1 and a second area A2.
- the first area A1 includes at least one primary function classification label 1_n (n is an integer greater than or equal to 1);
- the second area A2 includes multiple secondary function classification labels 1_nm (m is an integer greater than or equal to 1) and at least one tertiary function classification label 1_nmk (k is an integer greater than or equal to 1);
- in response to a trigger instruction selecting one of the primary function classification labels, the display screen 1 displays in the second area A2 the plurality of secondary function classification labels 1_nm included in the selected primary function classification label 1_n and at least one tertiary function classification label 1_nmk included in each secondary function classification label 1_nm (in FIG. 2, 1_1 is the selected primary function classification label);
- in response to a trigger instruction selecting one of the at least one tertiary function classification label 1_nmk, the display screen 1 displays a second interface 20, as shown in FIG. 7; the content included in the second interface 20 is different from the content included in the first interface 10.
- the electronic device includes a display screen, a memory, and a processor, where a first interface is displayed on the display screen, and the first interface includes at least one primary function classification label, multiple secondary function classification labels, and at least one tertiary function classification label included in each secondary function classification label; a second interface can be displayed by selecting a tertiary function classification label, so that the function effect corresponding to that tertiary function classification label can be experienced on the second interface.
- the electronic device allows the user to select different tertiary function classification labels on the first interface for experience, which provides practicality for users and gives the device certain tool-like properties.
- the primary function classification label is a summary of multiple secondary function classification labels with the same characteristics
- the secondary function classification label is a summary of multiple tertiary function classification labels with the same characteristics.
- the three-level function classification labels correspond to different user experience functions.
- the names of the function classification tags at all levels can be named according to the experience functions that can be realized, which is not limited here.
- the first-level functional classification tags are computational vision, image intelligence, and human-computer interaction
- the secondary function classification tags corresponding to computational vision are picture recognition, OCR (Optical Character Recognition), and face recognition
- the tertiary function classification labels corresponding to picture recognition can be painting recognition, fruit recognition, food recognition, car recognition, plant recognition, animal recognition, etc.
- the three-level functional classification labels corresponding to OCR can be business card recognition, bill recognition, and Barcode recognition, etc.
- the three-level functional classification labels corresponding to face recognition can be facial expression recognition, face attribute recognition, acquaintance recognition, etc.
- the secondary function classification labels corresponding to image intelligence are image enhancement, new image applications, image processing, and image search; the tertiary function classification labels corresponding to image enhancement can be HDR (High-Dynamic Range) image processing and image super-resolution processing, etc.; the tertiary function classification labels corresponding to new image applications can be artistic QR codes, etc.; the tertiary function classification labels corresponding to image processing can be image segmentation migration and magic animation, etc.; the tertiary function classification labels corresponding to image search can be identical-image search and similar-image search.
- the secondary function classification labels corresponding to human-computer interaction are natural language processing, gesture interaction, and posture interaction, etc.; the tertiary function classification labels corresponding to natural language processing can be art question answering and knowledge graph, etc.; the tertiary function classification labels corresponding to gesture interaction can be static gestures and dynamic gestures, etc.; the tertiary function classification label corresponding to posture interaction can be posture estimation, etc.
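The example taxonomy above can be written out as a nested mapping. The sketch below simply records the labels named in the text (with spellings normalized); it is illustrative, not an exhaustive or authoritative list:

```python
# Three-level function classification labels named in the description.
FUNCTION_TAXONOMY = {
    "computational vision": {
        "picture recognition": ["painting recognition", "fruit recognition",
                                "food recognition", "car recognition",
                                "plant recognition", "animal recognition"],
        "OCR": ["business card recognition", "bill recognition",
                "barcode recognition"],
        "face recognition": ["facial expression recognition",
                             "face attribute recognition",
                             "acquaintance recognition"],
    },
    "image intelligence": {
        "image enhancement": ["HDR image processing",
                              "image super-resolution processing"],
        "new image applications": ["artistic QR codes"],
        "image processing": ["image segmentation migration", "magic animation"],
        "image search": ["identical-image search", "similar-image search"],
    },
    "human-computer interaction": {
        "natural language processing": ["art question answering",
                                        "knowledge graph"],
        "gesture interaction": ["static gestures", "dynamic gestures"],
        "posture interaction": ["posture estimation"],
    },
}

# Each tertiary label corresponds to exactly one user experience function,
# so the flattened list should contain no duplicates.
tertiary = [t for sec in FUNCTION_TAXONOMY.values()
            for ts in sec.values() for t in ts]
assert len(tertiary) == len(set(tertiary))
```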
- the first interface 10 further includes a third area A3, and the third area A3, the first area A1, and the second area A2 are arranged in order from top to bottom of the first interface 10.
- the third area A3 is used to display still pictures or dynamic pictures.
- the content of the picture displayed in the third area is not limited here, and can be set according to requirements.
- for example, the picture displays information such as corporate advertisements, news, and achievements to users.
- the third area A3 further includes an operation label S1 for the user to input information.
- the name of the operation tag can be named according to actual needs.
- for example, the name of the operation tag is "cooperation consultation", which makes it convenient for users interested in cooperation to leave contact information and cooperation information.
- the operation label S1 can be located at any position in the third area A3, which is not limited here.
- the processor is further configured to execute the computer instructions to cause the electronic device to perform: in response to a start instruction selecting the operation label for the user to input information, displaying an input information box on a new display interface so that the user can enter information.
- the first interface 10 further includes an operation icon S2 for the user to input information, and the operation icon S2 is located at a fixed position on the display screen.
- the operation icon can be designed as various icons, such as iconic icons representing enterprises, etc., which are not limited here.
- the processor is further configured to execute the computer instructions to cause the electronic device to perform: in response to a start instruction selecting the operation icon for the user to input information, displaying an input information box on a new display interface so that the user can enter information.
- when the display screen displays the first interface, in response to a directional-movement trigger instruction along the first direction in the second area, the corresponding primary function classification label is selected.
- the first direction may be the horizontal direction of the display screen; that is, when the display screen displays the first interface and the second area slides along the first direction X, the primary function classification label can be switched.
- for example, the selected primary function classification label is changed from 1_1 to 1_2, and the second area A2 simultaneously displays the secondary function classification labels 1_21, 1_22, and 1_23 included in the selected primary function classification label 1_2 and at least one tertiary function classification label 1_2mk included in each secondary function classification label 1_2m.
- 1_21 includes 1_211 and 1_212
- 1_22 includes 1_221 and 1_222
- 1_23 includes 1_231 and 1_232.
- the entire first interface moves relative to the display screen in the second direction, and when the first area moves to the top of the first interface, the position of the first area remains fixed while the second area continues to move in the second direction.
- the second direction may be the vertical direction of the display screen.
- the first interface includes the first area and the second area, as shown in FIG. 6a
- the first area A1 and the second area A2 both move in the second direction Y relative to the display screen.
- when the first area A1 moves to the top of the first interface, the position of the first area A1 remains fixed, and the second area A2 continues to move in the second direction.
- the first area A1 and the second area A2 move in the second direction Y relative to the display screen, which is not limited here.
- the first interface includes the first area, the second area, and the third area, as shown in FIG. 6b
- the first area A1, the second area A2, and the third area A3 are relative to the display screen. All move in the second direction Y.
- when the first area A1 moves to the top of the first interface, the position of the first area A1 remains fixed, and the second area A2 continues to move in the second direction; in this way, it is convenient for the user to determine the primary function classification label to which the currently displayed content of the second area belongs.
- the first area A1, the second area A2, and the third area A3 all move in the second direction Y relative to the display screen, which is not limited here.
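The scroll behavior described above — the whole interface moves until the first area reaches the top of the screen, after which only the second area keeps moving — resembles a "sticky header". A minimal sketch, with pixel heights assumed purely for illustration:

```python
# Sticky-header sketch of the first-interface scroll (coordinates assumed).

AREA3_HEIGHT = 200  # third area A3 (topmost, e.g. banner pictures)
AREA1_HEIGHT = 60   # first area A1 (primary function classification labels)

def area1_top(scroll_offset):
    """Top y-coordinate of area A1 after scrolling up by scroll_offset px.

    A1 starts below A3 and moves with the whole interface until it hits
    the top of the screen (y == 0); from then on its position stays fixed
    while the second area A2 continues to scroll beneath it.
    """
    return max(0, AREA3_HEIGHT - scroll_offset)

assert area1_top(0) == 200    # no scrolling: A1 sits below A3
assert area1_top(150) == 50   # whole interface has moved up
assert area1_top(200) == 0    # A1 reaches the top of the interface...
assert area1_top(400) == 0    # ...and stays pinned thereafter
```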
- when the display screen displays the second interface corresponding to the selected tertiary function classification label, as shown in FIG. 7, the second interface 20 includes a first display area B1, a second display area B2, a third display area B3, and a fourth display area B4;
- the first display area B1 includes an experience object picture or an experience object effect picture of the tertiary function, such as Picture 1 in FIG. 7;
- the second display area B2 includes multiple example pictures distributed along the third direction X', such as Example Picture 1, Example Picture 2, and Example Picture 3 in FIG. 7; each example picture is smaller than the picture in the first display area;
- the first display area B1 displays the effect picture and the complete picture corresponding to the selected example picture, and the effect picture corresponds to the function of the selected tertiary function classification label;
- in response to a directional-movement trigger instruction along the third direction X', the multiple example pictures move simultaneously along the third direction X' in the second display area B2;
- the third display area B3 includes a first operation label S3 for uploading pictures
- the fourth display area B4 includes an application scenario label S4 associated with the selected three-level function classification label.
- the example image can be configured by the background server, so that the user can directly experience the effect of the algorithm without uploading the image.
- the operation label S3 for uploading pictures in the third display area B3 facilitates the user to select a local picture or take a photo for upload.
- the first display area B1, the second display area B2, the third display area B3, and the fourth display area B4 are arranged along the fourth direction Y' on the second interface 20.
- the third direction is the horizontal direction of the display screen
- the fourth direction is the vertical direction of the display screen
- the name of each label is not specifically limited; in practical applications, a label can be named according to the function it needs to implement.
- the second interface shown in FIG. 7 is described through specific embodiments.
- the second interface 20 displayed on the display screen is as shown in FIG. 8, where three example pictures are displayed in area B2, and the selected example picture is the first one from left to right.
- the first example picture displayed in area B2 is only a partial picture; area B1 displays the complete picture of the first example picture, after HDR processing.
- the second interface 20 displayed on the display screen is shown in Figure 9.
- the second display area B2 displays several style example images, and the user can swipe left and right to view more styles .
- when the display screen displays the second interface 20, the second display area B2 may also include an operation label for comparison switching; for example, when the comparison button shown in FIG. 8 is selected, the picture in the first display area B1 switches between the experience object effect picture and the original picture, so the processing effect can be compared intuitively.
- when the display screen displays the second interface corresponding to the selected three-level function classification label, as shown in FIG. 10, the second interface further includes a fifth display area B5;
- the fifth display area B5 displays the attribute information of the complete image.
- the first display area B1, the second display area B2, the fifth display area B5, the third display area B3, and the fourth display area B4 are arranged along the fourth direction Y' on the second interface 20. Further, the fifth display area B5 may be set below the first display area B1 and the second display area B2, to facilitate comparing the complete picture with its attribute information.
- the second interface shown in FIG. 10 is described through specific embodiments.
- the attribute information may include: category, subject, content, etc., which is not limited here.
- a confidence level can be appended after each recognition result, which is not limited here.
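As a minimal sketch of appending a confidence level after each recognition result, as just described — the helper name and the percentage formatting are assumptions, not from the disclosure:

```python
def format_results(results):
    """Append a confidence level after each recognition result.
    results: list of (label, confidence) pairs with confidence in [0, 1]."""
    return [f"{label} ({confidence:.0%})" for label, confidence in results]

print(format_results([("landscape painting", 0.97), ("mountains", 0.88)]))
# → ['landscape painting (97%)', 'mountains (88%)']
```
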
- the attribute information may include various text information on the business card, and the second interface 20 displayed on the display screen is as shown in FIG. 12.
- the attribute information may include the barcode recognition result and the text recognition result, and the second interface 20 displayed on the display screen is as shown in FIG. 13.
- the first display area B1 further includes the recognition result of the experience object graph, and the recognition result is superimposed on the three-level functional experience object graph.
- the recognition result may include the fruit name and the confidence level
- the second interface 20 displayed on the display screen is, for example, as shown in FIG. 14.
- the third display area further includes a second operation label; in response to the selection of the second operation label, the attribute information is saved in the attribute information data list.
- the third display area B3 also includes an operation label for saving attribute information to the address book, making it convenient for users to save the attribute information to the phone's address book.
- the attribute information includes the address, mobile phone number, and so on. The user can select the attribute information to be saved as needed; for example, selecting the mobile phone number saves it to the address book.
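The save flow above — select only the needed fields, then copy them into the address book — might look like the following sketch. The `save_selected` helper, the field names, and the sample values are all hypothetical.

```python
def save_selected(attribute_info, selected_keys, address_book):
    """Copy only the user-selected attribute fields into the address book."""
    for key in selected_keys:
        if key in attribute_info:
            address_book[key] = attribute_info[key]
    return address_book

# Hypothetical recognized business-card attributes.
card = {"name": "Zhang San", "mobile": "13800000000", "address": "Beijing"}
book = save_selected(card, ["mobile"], {})
print(book)  # → {'mobile': '13800000000'}
```
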
- when the display screen displays the second interface corresponding to the selected three-level function classification label, the second interface includes a first display area, a second display area, and a third display area;
- in response to input information in the second display area, the first display area of the second interface on the display screen generates a first conversion image corresponding to the input information;
- the third display area includes a third operation label and a fourth operation label.
- the third operation label is used to convert the first conversion image into a second conversion image
- the fourth operation label is used to convert the first conversion image into a third conversion image.
- the second conversion image and the third conversion image are different.
- the third operation label is used to generate a first type of two-dimensional code S5;
- the fourth operation label is used to generate a second type of two-dimensional code S6.
- the first type of two-dimensional code is a QR code with a beautified background (the QR code in area C1 in FIG. 16);
- the second type of two-dimensional code is a QR code with a beautified structure (the QR code in area C1 in FIG. 17).
- the second conversion image is obtained by fusing the background picture (the picture of two horses in FIG. 16) with the QR code in FIG. 15,
- without referring to the background picture when processing the QR code in FIG. 15; and
- the QR code picture in FIG. 17 (equivalent to the third conversion image) is obtained by fusing the background picture with the QR code while referring to the background picture when processing the QR code in FIG. 15,
- yielding the second picture in FIG. 17.
- the distribution of black and white dots in the two-dimensional code is related to the distribution of light and dark in the background picture.
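One way to read the structure-beautified code just described: modules that are not required for decoding follow the light/dark distribution of the background, while protected (functional) modules keep their generated value. The toy sketch below illustrates that idea only; it is an assumption, not the actual fusion algorithm of the disclosure.

```python
def fuse_structure(qr, background, protected):
    """qr / background: 2D lists with 1 = dark module, 0 = light module.
    protected marks modules that must stay as generated so the code
    remains decodable; every other module tracks the background."""
    return [
        [qr[r][c] if protected[r][c] else background[r][c]
         for c in range(len(qr[0]))]
        for r in range(len(qr))
    ]

qr   = [[1, 0], [0, 1]]
bg   = [[0, 0], [1, 1]]
keep = [[1, 0], [0, 1]]   # diagonal modules are protected
print(fuse_structure(qr, bg, keep))
# → [[1, 0], [1, 1]]
```

In a real QR code the protected set would at minimum cover the finder, timing, and format patterns, which is why the result still scans while its black/white distribution echoes the background.
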
- take, as an example, a selected three-level function classification label that realizes the function of encoding a paragraph of text into a two-dimensional code.
- the second interface 20 includes a first display area B1, a second display area B2, and a third display area B3;
- the first display area B1 includes a two-dimensional code generated according to the text
- the second display area B2 includes a text editing area for generating a QR code of the first display area B1;
- the third display area B3 includes an operation label S5 for generating a first type of two-dimensional code and an operation label S6 for generating a second type of two-dimensional code, the first type and the second type of two-dimensional code being different.
- the second interface 20 also has a fourth display area B4, where the fourth display area B4 includes an application scenario label S4 associated with the selected three-level function classification label.
- the text editing area in the second display area B2 makes it convenient for users to input text; the background server can also configure a section of default text, so that users can experience the function directly without input.
- the first display area B1 further includes an operation label S7 for storing the two-dimensional code, making it convenient for users to save the generated QR code locally on the phone.
- that the first type and the second type of two-dimensional code are different means the two-dimensional codes have different appearances, while the information they contain can be the same.
- the first type of two-dimensional code is a two-dimensional code that can beautify the background
- the second type of two-dimensional code is a two-dimensional code that can beautify the structure, which is not limited here.
- in response to a trigger instruction selecting either the operation label for generating the first type of two-dimensional code or the operation label for generating the second type, the electronic device displays a third interface on the display screen, the third interface including a first functional area, a second functional area, and a third functional area distributed along the third direction;
- the first functional area includes a QR code with a background image
- the second functional area includes a text editing area for modifying the QR code
- the third functional area includes operation labels for changing the background image.
- the first type of two-dimensional code is a two-dimensional code that can beautify the background
- in response to selecting the operation label for generating the first type of two-dimensional code, the electronic device displays, as shown in FIG. 16, on the third interface 30:
- the first functional area C1 includes a two-dimensional code with a background image
- the second functional area C2 includes a text editing area for modifying the QR code
- the third functional area C3 includes operation labels for changing the background picture.
- the second type of two-dimensional code is a two-dimensional code that can beautify the structure
- in response to selecting the operation label for generating the second type of two-dimensional code, the electronic device displays the third interface as shown in FIG. 17.
- the two-dimensional code with a background picture in the first functional area C1 is obtained by adjusting the black and white elements in the two-dimensional code according to the light and dark areas of the background picture, improving the aesthetic effect of the artistic two-dimensional code; it is a two-dimensional code with a beautified structure.
- the first functional area C1 further includes an operation label S8 for storing a two-dimensional code.
- the processor is further configured to execute computer instructions to make the electronic device perform:
- in response to an application scenario label being selected, the link interface corresponding to the selected application scenario label is displayed on the display screen:
- the link interface corresponding to the application scenario label is shown in Figure 18, which introduces the application scenario of the three-level function classification label, where the application scenario includes an introduction and a detailed introduction.
- the "Contact Us" and "Feedback" buttons can be clicked to open the corresponding display interfaces.
- FIG. 19 shows the display interface opened by clicking the "Contact Us" button.
- this interface provides users with a window for business contact with the company. The user fills in the company name, name, contact number, email address, a specific description, and other information, then clicks the submit button to send the information to the backend and waits for the company to get in touch. After clicking the submit button, the user gets a "submission successful" prompt and jumps to the first interface.
- Figure 20 shows the display interface opened by clicking the "Feedback” button.
- This interface provides the user with a window to provide feedback on the algorithm.
- the user fills in specific comments and clicks the submit button to send the information to the background.
- after clicking the submit button, the user gets the prompt "Submission is successful, thank you for your valuable comments" and jumps to the first interface.
- the present disclosure is only schematically illustrated by the above-mentioned embodiments, and is not specifically limited thereto.
- the above-mentioned electronic devices provided by the embodiments of the present disclosure allow users to experience various intelligent functions intuitively. The carousel pictures on the first interface promote corporate information to users; the function classification labels at each level present the company's artificial intelligence functions; users can use default pictures or load local pictures to experience the functions and effects of each algorithm, and can download part of the processing results; users can learn the application scenarios of each function, give feedback on each function, and make business contact with the enterprise to seek cooperation.
- embodiments of the present disclosure also provide an interface display method, which is applied to an electronic device with a display screen.
- the interface display method includes:
- the display screen displays a first interface, the first interface includes a first area and a second area, the first area includes at least one first-level function classification label, and the second area includes a plurality of second-level function classification labels and each second-level function At least one three-level functional classification label included in the classification label;
- in response to a trigger instruction selecting one of the at least one first-level function classification label, the display screen displays, in the second area, the multiple second-level function classification labels included in the selected first-level function classification label and the at least one three-level function classification label included in each second-level function classification label;
- in response to a trigger instruction selecting one of the at least one three-level function classification label, the display screen displays a second interface; the content included in the second interface is different from the content included in the first interface.
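The three steps of the interface display method can be sketched as a small controller; the class, method names, and the label tree below are illustrative assumptions, not elements of the disclosure.

```python
class InterfaceController:
    def __init__(self, label_tree):
        # label_tree: {level-1 label: {level-2 label: [level-3 labels]}}
        self.tree = label_tree
        self.screen = "first interface"
        self.second_area = {}

    def select_level1(self, label):
        # Step 2: show the selected label's level-2 labels, each with
        # its level-3 labels, in the second area of the first interface.
        self.second_area = self.tree[label]

    def select_level3(self, label):
        # Step 3: switch to the second interface for the chosen function,
        # whose content differs from the first interface.
        self.screen = f"second interface: {label}"

ctl = InterfaceController({"vision": {"OCR": ["business card", "barcode"]}})
ctl.select_level1("vision")
ctl.select_level3("business card")
print(ctl.screen)  # → second interface: business card
```
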
- embodiments of the present disclosure also provide a user interface system for picture recognition, which includes multiple first function controls, at least one second function control, at least one third function control, and a picture display control displayed in the user interface; each of the multiple first function controls presents, in the user interface, the first picture corresponding to that first function control, and the presentation states, in the user interface, of the objects in the first pictures corresponding to the first function controls are different;
- the second function control can be operated after selection so that the user can select and upload a second picture;
- one of the multiple first function controls can be operated, after selection or after the user selects and uploads the second picture, so that the picture in the picture display control is updated to the first picture or the second picture, and the attribute information of the updated picture is presented on the user interface;
- FIG. 22 illustrates the user interface system provided by the embodiment of the present disclosure.
- the user interface can realize the function of recognizing the information in a business card uploaded by the user, and specifically includes three first function controls displayed in the user interface.
- each first function control presents its corresponding first picture (a photo of a business card in FIG. 22), and the presentation state in the user interface of the object (the business card in FIG. 22) in each first picture is different. Specifically, in FIG. 22, the business card in the picture corresponding to 101 was photographed from an angle close to the lower edge of the business card, so the business card in the picture is trapezoidal (the distorted state of the business card in the picture);
- the business card in the picture corresponding to 102 is at an acute angle to the horizontal (the slanted state of the business card in the picture); the business card in the picture corresponding to 103 was taken from directly above, so the long and short sides of the business card are parallel to the long and short sides of the picture frame. For business cards placed and photographed in any of these three ways, the information on the card can be recognized and displayed in the recognition result column.
- the second function control 201 (the upload picture control in FIG. 22) can be operated after being selected by the user to allow the user to select and upload a user-defined second picture; the specific user selection method can be to touch and click the second function control, The second function control can also be selected by mouse, gesture, voice command or other methods; the custom picture uploaded by the user can be the picture stored on the terminal device, or the picture taken by the user in real time or downloaded from the Internet or cloud server image.
- one of the multiple first function controls can be operated, after selection or after the user selects and uploads the second picture, so that the picture in the picture display control is updated to the first picture or the second picture, and the attribute information of the updated picture is presented on the user interface. For example, in FIG. 22, the selected first function control is 101, and the picture corresponding to 101 is displayed in the picture display control 401; when 101 is selected, the recognition result also shows the relevant information in the business card corresponding to 101, including name, position, company, address, email address, and mobile phone number. If the user uploads a custom picture, the picture display control 401 displays the uploaded custom picture,
- and the recognition result is the result of recognizing the relevant information in the user-defined picture.
- the relevant information here can be the text information, letter information, numeric information, and so on contained in the custom picture.
- the attribute information of the updated picture is, for example, the relevant information of the business card corresponding to 101 in FIG. 22.
- the third function control 301 can be operated so that the attribute information of the updated picture is stored in a first data list; for example, the user can select 301 by clicking and save the relevant information in the business card to the corresponding database.
- the function of the control 301 in FIG. 22 is to save the phone number to the user's address book.
- the first data list here is the user's address book list. Of course, the user can also set the function of the control 301 according to requirements.
- embodiments of the present disclosure also provide an interactive method for image recognition through a user interface, as shown in FIG. 23, including:
- S201 Provide a plurality of first function controls, at least one second function control, at least one third function control, and a picture display control in the user interface;
- Each of the plurality of first functional controls presents a first picture corresponding to each first functional control, and the presentation state of an object in the corresponding first picture in each first functional control is different;
- the attribute information of the updated picture includes at least one of text information, letter information, and numeric information.
- the attribute information of the picture may include name, position, company, address, and email address.
- storing the attribute information of the updated picture in the first data list includes:
- the selected attribute information is stored in the first data list.
- each of the name, position, company, address, and email address in the attribute information of the picture can be individually selected and stored in the first data list.
- the presentation state includes at least one of the tilt state of the object and the distortion state of the object.
- the presentation state of the object in each first picture is different.
- the business card in the picture corresponding to 101 is in a distorted state
- the business card in the picture corresponding to 102 is in an oblique state.
- the electronic devices, computer storage media, computer program products, and chips provided in the embodiments of the present application are all used to execute the corresponding methods provided above; therefore, for the beneficial effects they can achieve, refer to the beneficial effects of the corresponding methods provided above, which will not be repeated here.
- the disclosed device and method can be implemented in other ways.
- the device embodiments described above are only illustrative; for example, the division into modules or units is only a division by logical function, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another device, or some features may be ignored or not implemented.
- the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
- the units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or multiple physical units, that is, they may be located in one place, or they may be distributed to multiple different places. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- the above-mentioned integrated unit can be realized in the form of hardware or software function unit.
- the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a readable storage medium.
- the technical solutions of the embodiments of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product stored in a storage medium, including several instructions to make a device (which may be a single-chip microcomputer, a chip, etc.) or a processor execute all or part of the steps of the methods of the various embodiments of the present application.
- the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.
Claims (15)
- An interaction method for picture recognition through a user interface, comprising: providing, in the user interface, a plurality of first function controls, at least one second function control, at least one third function control, and a picture display control; each of the plurality of first function controls presents a first picture corresponding to that first function control, and the presentation states of the objects in the first pictures corresponding to the first function controls are different; in response to a selection instruction for the second function control, receiving a second picture selected and uploaded by a user; after receiving a selection indication for one of the plurality of first function controls, or receiving an instruction by which the user selects and uploads the second picture, the picture in the picture display control is updated to the first picture or the second picture, and attribute information of the updated picture is presented on the user interface; after the attribute information of the updated picture is presented and a selection indication for the third function control is received, the attribute information of the updated picture is stored in a first data list.
- The interaction method according to claim 1, wherein the attribute information of the updated picture includes at least one of text information, letter information, and numeric information.
- The interaction method according to claim 1, wherein storing the attribute information of the updated picture in the first data list, after the attribute information is presented and a selection indication for the third function control is received, includes: when the attribute information contains multiple items, after the selected attribute information and a selection indication for the third function control are received, the selected attribute information is stored in the first data list.
- The interaction method according to claim 1, wherein the presentation state includes at least one of a tilt state of the object and a distortion state of the object.
- A user interface system for picture recognition, comprising a plurality of first function controls, at least one second function control, at least one third function control, and a picture display control displayed in the user interface; each of the plurality of first function controls presents, in the user interface, a first picture corresponding to that first function control, and the presentation states, in the user interface, of the objects in the first pictures corresponding to the first function controls are different; the second function control is operable after selection so that a user selects and uploads a second picture; one of the plurality of first function controls is operable, after selection or after the user's operation of selecting and uploading the second picture, so that the picture in the picture display control is updated to the first picture or the second picture and attribute information of the updated picture is presented on the user interface; after the attribute information of the updated picture is presented and the third function control is selected, it is operable so that the attribute information of the updated picture is stored in a first data list.
- An electronic device, comprising a display screen, a memory, and a processor, wherein: the memory, connected to the display screen and the processor, is configured to store computer instructions and save data associated with the display screen; the processor, connected to the display screen and the memory, is configured to execute the computer instructions to cause the electronic device to: display a first interface on the display screen, the first interface including a first area and a second area, the first area including at least one first-level function classification label, the second area including a plurality of second-level function classification labels and at least one three-level function classification label included in each second-level function classification label; in response to a trigger instruction selecting one of the at least one first-level function classification label, display, in the second area, the plurality of second-level function classification labels included in the selected first-level function classification label and the at least one three-level function classification label included in each second-level function classification label; in response to a trigger instruction selecting one of the at least one three-level function classification label, display a second interface on the display screen; the content included in the second interface is different from the content included in the first interface.
- The electronic device according to claim 6, wherein the first interface further includes a third area, and the third area, the first area, and the second area are arranged in order from the top to the bottom of the first interface.
- The electronic device according to claim 6, wherein, when the display screen displays the first interface, in response to a directional movement trigger instruction along a first direction in the second area, the corresponding first-level function classification label is selected.
- The electronic device according to claim 7, wherein, when the display screen displays the first interface, in response to a directional movement trigger instruction along a second direction, the entire first interface moves along the second direction relative to the display screen, and when the first area of the first interface moves to the top of the first interface, the first area remains fixed in position while the second area moves along the second direction.
- The electronic device according to claim 6, wherein, when the display screen displays the second interface corresponding to the selected three-level function classification label, the second interface includes a first display area, a second display area, a third display area, and a fourth display area; the first display area includes a three-level function experience object picture or a three-level function experience object effect picture; the second display area includes multiple example pictures distributed along a third direction, each of the multiple example pictures being smaller in size than the picture in the first display area; in response to a selection instruction for one of the multiple example pictures, the first display area displays the effect picture and the complete picture corresponding to the selected example picture, the effect picture corresponding to the function corresponding to the selected three-level function classification label; in response to a directional movement trigger instruction along the third direction, the multiple example pictures move simultaneously along the third direction in the second display area; the third display area includes a first operation label for uploading pictures; the fourth display area includes an application scenario label associated with the selected three-level function classification label.
- The electronic device according to claim 10, wherein the first display area, the second display area, the third display area, and the fourth display area are arranged along a fourth direction on the second interface, the third direction being the horizontal direction on the display screen and the fourth direction being the vertical direction on the display screen.
- The electronic device according to claim 10, wherein, when the display screen displays the second interface corresponding to the selected three-level function classification label, the second interface further includes a fifth display area; the fifth display area displays attribute information of the complete picture.
- The electronic device according to claim 12, wherein the third display area further includes a second operation label; in response to selection of the second operation label, the attribute information is saved to an attribute information data list.
- The electronic device according to claim 6, wherein, when the display screen displays the second interface corresponding to the selected three-level function classification label, the second interface includes a first display area, a second display area, and a third display area; in response to input information in the second display area, the first display area of the second interface on the display screen generates a first conversion image corresponding to the input information; the third display area includes a third operation label and a fourth operation label, the third operation label being used to convert the first conversion image into a second conversion image, the fourth operation label being used to convert the first conversion image into a third conversion image, the second conversion image and the third conversion image being different.
- An interaction method for an electronic device, wherein the electronic device includes a display screen, the method comprising: controlling the display screen to display a first interface, the first interface including a first area and a second area, the first area including at least one first-level function classification label, the second area including a plurality of second-level function classification labels and at least one three-level function classification label included in each second-level function classification label; in response to a trigger instruction selecting one of the at least one first-level function classification label, controlling the display screen to display, in the second area, the plurality of second-level function classification labels included in the selected first-level function classification label and the at least one three-level function classification label included in each second-level function classification label; in response to a trigger instruction selecting one of the at least one three-level function classification label, controlling the display screen to display a second interface; the content included in the second interface is different from the content included in the first interface.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/121748 WO2021102850A1 (zh) | 2019-11-28 | 2019-11-28 | 图片识别的用户界面系统、电子设备及交互方法 |
US17/255,458 US20210373752A1 (en) | 2019-11-28 | 2019-11-28 | User interface system, electronic equipment and interaction method for picture recognition |
CN201980002707.7A CN113260970B (zh) | 2019-11-28 | 2019-11-28 | 图片识别的用户界面系统、电子设备及交互方法 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/121748 WO2021102850A1 (zh) | 2019-11-28 | 2019-11-28 | 图片识别的用户界面系统、电子设备及交互方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021102850A1 true WO2021102850A1 (zh) | 2021-06-03 |
Family
ID=76129786
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/121748 WO2021102850A1 (zh) | 2019-11-28 | 2019-11-28 | 图片识别的用户界面系统、电子设备及交互方法 |
Country Status (3)
Country | Link |
---|---|
US (1) | US20210373752A1 (zh) |
CN (1) | CN113260970B (zh) |
WO (1) | WO2021102850A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114779975A (zh) * | 2022-03-31 | 2022-07-22 | 北京至简墨奇科技有限公司 | 指掌纹图像检视界面的处理方法、装置及电子系统 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108205406A (zh) * | 2016-12-19 | 2018-06-26 | 三星电子株式会社 | 电子设备及其图像同步方法 |
CN108415635A (zh) * | 2017-02-10 | 2018-08-17 | 广州森成和信息技术有限公司 | 一种图片分享系统 |
CN109085982A (zh) * | 2018-06-08 | 2018-12-25 | Oppo广东移动通信有限公司 | 内容识别方法、装置及移动终端 |
CN110097057A (zh) * | 2018-01-31 | 2019-08-06 | 精工爱普生株式会社 | 图像处理装置以及存储介质 |
CN110135929A (zh) * | 2018-02-02 | 2019-08-16 | 英属开曼群岛商玩美股份有限公司 | 实行于虚拟化妆应用程序的系统、方法及存储媒体 |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10331297B2 (en) * | 2014-05-30 | 2019-06-25 | Apple Inc. | Device, method, and graphical user interface for navigating a content hierarchy |
US9606716B2 (en) * | 2014-10-24 | 2017-03-28 | Google Inc. | Drag-and-drop on a mobile device |
- 2019
- 2019-11-28 US US17/255,458 patent/US20210373752A1/en active Pending
- 2019-11-28 WO PCT/CN2019/121748 patent/WO2021102850A1/zh active Application Filing
- 2019-11-28 CN CN201980002707.7A patent/CN113260970B/zh active Active
Also Published As
Publication number | Publication date |
---|---|
CN113260970A (zh) | 2021-08-13 |
US20210373752A1 (en) | 2021-12-02 |
CN113260970B (zh) | 2024-01-23 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19954449 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19954449 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08.02.2023) |
|