WO2021102850A1 - User interface system, electronic device, and interaction method for picture recognition - Google Patents

User interface system, electronic device, and interaction method for picture recognition

Info

Publication number
WO2021102850A1
WO2021102850A1 (PCT/CN2019/121748; CN2019121748W)
Authority
WO
WIPO (PCT)
Prior art keywords
interface
picture
area
function
display screen
Prior art date
Application number
PCT/CN2019/121748
Other languages
English (en)
French (fr)
Inventor
Liu Hanwen (刘瀚文)
Na Yanbo (那彦波)
Zhu Dan (朱丹)
Original Assignee
BOE Technology Group Co., Ltd. (京东方科技集团股份有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co., Ltd.
Priority to PCT/CN2019/121748 (WO2021102850A1)
Priority to US17/255,458 (US20210373752A1)
Priority to CN201980002707.7A (CN113260970B)
Publication of WO2021102850A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/55 - Clustering; Classification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482 - Interaction with lists of selectable items, e.g. menus
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842 - Selection of displayed objects or displayed text elements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing

Definitions

  • the present disclosure relates to the technical field of terminals, in particular to a user interface system, electronic equipment and interaction method for picture recognition.
  • the embodiments of the present disclosure provide an interactive method for image recognition through a user interface, which includes:
  • each of the plurality of first function controls presents a first picture corresponding to that first function control, and the presentation state of the object in the corresponding first picture differs between the first function controls;
  • the picture in the picture display control is updated to the first picture or the second picture, and the attribute information of the updated picture is then presented on the user interface;
  • when the attribute information of the updated picture is presented and a selection instruction for the third function control is received, the attribute information of the updated picture is stored in a first data list.
  • the attribute information of the updated picture includes at least one of text information, letter information, and numeric information.
  • storing the attribute information of the updated picture in the first data list, when the attribute information is presented and the selection instruction of the third function control is received, includes:
  • storing the selected attribute information in the first data list after the selected attribute information and the selection instruction for the third function control are received.
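The select-and-store flow above (present the updated picture's attribute information, let the user pick entries, and persist the selection once the third function control is triggered) can be sketched as follows. All class and method names here are illustrative assumptions, not part of the disclosure.

```python
class AttributePanel:
    """Hypothetical model of the attribute-info panel and 'first data list'."""

    def __init__(self):
        self.presented = {}        # attribute info shown on the user interface
        self.selected = {}         # subset of entries the user has ticked
        self.first_data_list = []  # persistent store named in the claims

    def present(self, attributes):
        """Show the updated picture's attribute info (text/letters/numbers)."""
        self.presented = dict(attributes)
        self.selected = {}

    def select(self, key):
        """User picks one presented attribute entry."""
        if key in self.presented:
            self.selected[key] = self.presented[key]

    def on_third_control(self):
        """Third function control: store only the selected entries."""
        if self.selected:
            self.first_data_list.append(dict(self.selected))
```

For example, presenting `{"name": ..., "phone": ...}`, selecting only `"phone"`, and triggering the control stores just the phone entry.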
  • the presentation state includes: at least one of an oblique state of the object and a distortion state of the object.
  • an embodiment of the present disclosure also provides a user interface system for picture recognition, which includes a plurality of first function controls, at least one second function control, at least one third function control, and a picture display control displayed in the user interface;
  • each of the plurality of first function controls presents, in the user interface, a first picture corresponding to that first function control, and the presentation state of the object in the corresponding first picture differs between the first function controls;
  • the second function control is operable after selection so that the user can select and upload a second picture;
  • one of the plurality of first function controls is operable after being selected, or after the user selects and uploads a second picture, so that the picture in the picture display control is updated to the first picture or the second picture and the attribute information of the updated picture is presented on the user interface;
  • an embodiment of the present disclosure also provides an electronic device, including a display screen, a memory, and a processor, wherein:
  • the memory, connected to the display screen and the processor, is configured to store computer instructions and save data associated with the display screen;
  • the processor, connected to the display screen and the memory, is configured to execute the computer instructions to cause the electronic device to execute:
  • a first interface is displayed on the display screen; the first interface includes a first area and a second area, the first area includes at least one primary function classification label, and the second area includes a plurality of secondary function classification labels and at least one tertiary function classification label included in each secondary function classification label;
  • in response to a trigger instruction selecting one of the at least one primary function classification label, the display screen displays in the second area the plurality of secondary function classification labels included in the selected primary function classification label and at least one tertiary function classification label included in each secondary function classification label;
  • in response to a trigger instruction selecting one of the at least one tertiary function classification label, the display screen displays a second interface; the content included in the second interface is different from the content included in the first interface.
  • the first interface further includes a third area, and the third area, the first area, and the second area are arranged from top to bottom along the first interface.
  • when the display screen displays the first interface, in response to a directional movement trigger instruction in the second area along the first direction, the corresponding primary function classification label is selected.
  • in response to a directional movement trigger instruction along a second direction, the entire first interface moves relative to the display screen in the second direction; when the first area of the first interface moves to the top of the first interface, the position of the first area remains fixed and the second area moves in the second direction.
  • when the display screen displays the second interface corresponding to the selected tertiary function classification label, the second interface includes a first display area, a second display area, a third display area, and a fourth display area;
  • the first display area includes a tertiary-function experience object image or a tertiary-function experience object effect image;
  • the second display area includes a plurality of example images distributed along a third direction, each of which is smaller than the picture in the first display area;
  • the first display area displays the effect image and the complete image corresponding to the selected example image, and the effect image corresponds to the function of the selected tertiary function classification label;
  • in response to a directional movement trigger instruction along the third direction, the multiple example images move simultaneously along the third direction in the second display area;
  • the third display area includes a first operation label for uploading a picture;
  • the fourth display area includes an application scenario label associated with the selected three-level function classification label.
  • the first display area, the second display area, the third display area, and the fourth display area are arranged along a fourth direction on the second interface;
  • the third direction is the horizontal direction on the display screen, and the fourth direction is the vertical direction on the display screen.
  • when the display screen displays the second interface corresponding to the selected tertiary function classification label, the second interface further includes a fifth display area;
  • the fifth display area displays the attribute information of the complete image.
  • the third display area further includes a second operation label; in response to the selection of the second operation label, the attribute information is saved in an attribute information data list.
  • when the display screen displays the second interface corresponding to the selected tertiary function classification label, the second interface includes a first display area, a second display area, and a third display area;
  • in response to input information in the second display area, the first display area of the second interface generates a first conversion image corresponding to the input information;
  • the third display area includes a third operation label and a fourth operation label; the third operation label is used to convert the first conversion image into a second conversion image, the fourth operation label is used to convert the first conversion image into a third conversion image, and the second conversion image and the third conversion image are different.
  • an embodiment of the present disclosure also provides an electronic device interaction method, wherein the electronic device includes a display screen, and the method includes:
  • displaying a first interface on the display screen, wherein the first interface includes a first area and a second area;
  • the first area includes at least one primary function classification label;
  • the second area includes a plurality of secondary function classification labels and at least one tertiary function classification label included in each secondary function classification label;
  • in response to a trigger instruction selecting one of the at least one primary function classification label, controlling the display screen to display in the second area the plurality of secondary function classification labels included in the selected primary function classification label and at least one tertiary function classification label included in each secondary function classification label;
  • in response to a trigger instruction selecting one of the at least one tertiary function classification label, controlling the display screen to display a second interface; the content included in the second interface is different from the content included in the first interface.
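The interaction method above can be sketched as a small event controller: selecting a primary label refreshes the second area, and selecting a tertiary label switches the screen to the second interface. Class and attribute names are assumptions made for illustration only.

```python
class InterfaceController:
    """Hypothetical controller for the first/second interface flow."""

    def __init__(self, labels):
        # labels: primary label -> {secondary label: [tertiary labels]}
        self.labels = labels
        self.current = "first"   # which interface the display screen shows
        self.area2 = {}          # secondary/tertiary labels shown in the second area

    def on_primary_selected(self, primary):
        """Trigger instruction on a primary label: refresh the second area."""
        self.area2 = self.labels[primary]

    def on_tertiary_selected(self, tertiary):
        """Trigger instruction on a tertiary label: show the second interface."""
        self.current = ("second", tertiary)
```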
  • FIG. 1 is a schematic structural diagram of an electronic device provided by an embodiment of the disclosure;
  • FIG. 2 is the first schematic diagram of an interface displayed by the electronic device according to an embodiment of the disclosure;
  • FIG. 3 is the second schematic diagram of an interface displayed by the electronic device according to an embodiment of the disclosure;
  • FIG. 4 is the third schematic diagram of an interface displayed by the electronic device according to an embodiment of the disclosure;
  • FIG. 5 is the fourth schematic diagram of an interface displayed by the electronic device according to an embodiment of the disclosure;
  • FIG. 6a is the first schematic diagram of an interface change of the electronic device during display according to an embodiment of the disclosure;
  • FIG. 6b is the second schematic diagram of an interface change of the electronic device during display according to an embodiment of the disclosure;
  • FIG. 7 is the fifth schematic diagram of an interface displayed by the electronic device according to an embodiment of the disclosure;
  • FIG. 8 is the sixth schematic diagram of an interface displayed by the electronic device according to an embodiment of the disclosure;
  • FIG. 9 is the seventh schematic diagram of an interface displayed by the electronic device according to an embodiment of the disclosure;
  • FIG. 10 is the eighth schematic diagram of an interface displayed by the electronic device according to an embodiment of the disclosure;
  • FIG. 11 is the ninth schematic diagram of an interface displayed by the electronic device according to an embodiment of the disclosure;
  • FIG. 12 is the tenth schematic diagram of an interface displayed by the electronic device according to an embodiment of the disclosure;
  • FIG. 13 is the eleventh schematic diagram of an interface displayed by the electronic device according to an embodiment of the disclosure;
  • FIG. 14 is the twelfth schematic diagram of an interface displayed by the electronic device according to an embodiment of the disclosure;
  • FIG. 15 is the thirteenth schematic diagram of an interface displayed by the electronic device according to an embodiment of the disclosure;
  • FIG. 16 is the fourteenth schematic diagram of an interface displayed by the electronic device according to an embodiment of the disclosure;
  • FIG. 17 is the fifteenth schematic diagram of an interface displayed by the electronic device according to an embodiment of the disclosure;
  • FIG. 18 is the sixteenth schematic diagram of an interface displayed by the electronic device according to an embodiment of the disclosure;
  • FIG. 19 is the seventeenth schematic diagram of an interface displayed by the electronic device according to an embodiment of the disclosure;
  • FIG. 20 is the eighteenth schematic diagram of an interface displayed by the electronic device according to an embodiment of the disclosure;
  • FIG. 21 is a schematic flowchart of an interface display method provided by an embodiment of the disclosure;
  • FIG. 22 is a schematic diagram of a user interface system for picture recognition provided by an embodiment of the disclosure;
  • FIG. 23 is a flowchart of an interactive method for image recognition through a user interface provided by an embodiment of the disclosure.
  • An electronic device provided by an embodiment of the present disclosure includes a display screen 1, a memory 2 and a processor 3, wherein:
  • the memory 2, connected to the display screen 1 and the processor 3, is configured to store computer instructions and save data associated with the display screen 1;
  • the processor 3, connected to the display screen 1 and the memory 2, is configured to execute the computer instructions to make the electronic device execute:
  • a first interface 10 is displayed on the display screen.
  • the first interface 10 includes a first area A1 and a second area A2.
  • the first area A1 includes at least one primary function classification label 1_n (n is an integer greater than or equal to 1);
  • the second area A2 includes multiple secondary function classification labels 1_nm (m is an integer greater than or equal to 1) and at least one tertiary function classification label 1_nmk (k is an integer greater than or equal to 1);
  • in response to a trigger instruction selecting one of the at least one primary function classification label 1_n, the display screen 1 displays in the second area A2 the plurality of secondary function classification labels 1_nm included in the selected primary function classification label 1_n and at least one tertiary function classification label 1_nmk included in each secondary function classification label 1_nm (in FIG. 2, 1_1 is the selected primary function classification label);
  • in response to a trigger instruction selecting one of the at least one tertiary function classification label 1_nmk, the display screen 1 displays a second interface 20, as shown in FIG. 7; the content included in the second interface 20 is different from the content included in the first interface 10.
  • the electronic device includes a display screen, a memory, and a processor, wherein a first interface is displayed on the display screen, and the first interface includes at least one primary function classification label, multiple secondary function classification labels, and at least one tertiary function classification label included in each secondary function classification label; a second interface can be displayed by selecting a tertiary function classification label, so that the function effect corresponding to that label can be experienced on the second interface.
  • the electronic device allows the user to select different tertiary function classification labels in the first interface for experience, which makes the device practical and gives it certain tool-like properties.
  • the primary function classification label is a summary of multiple secondary function classification labels with the same characteristics;
  • the secondary function classification label is a summary of multiple tertiary function classification labels with the same characteristics;
  • the tertiary function classification labels correspond to different user experience functions.
  • the names of the function classification tags at all levels can be named according to the experience functions that can be realized, which is not limited here.
  • the first-level functional classification tags are computational vision, image intelligence, and human-computer interaction
  • the secondary function classification labels corresponding to computational vision are picture recognition, OCR (Optical Character Recognition), and face recognition;
  • the tertiary function classification labels corresponding to picture recognition can be painting recognition, fruit recognition, food recognition, car recognition, plant recognition, animal recognition, etc.;
  • the tertiary function classification labels corresponding to OCR can be business card recognition, bill recognition, barcode recognition, etc.;
  • the tertiary function classification labels corresponding to face recognition can be facial expression recognition, face attribute recognition, acquaintance recognition, etc.
  • the secondary function classification labels corresponding to image intelligence are image enhancement, new image applications, image processing, and image search; among them, the tertiary function classification labels corresponding to image enhancement can be HDR (High-Dynamic Range) image processing and image super-resolution processing, etc.; the tertiary function classification labels corresponding to new image applications can be artistic QR codes, etc.; the tertiary function classification labels corresponding to image processing can be image segmentation migration and magic animation, etc.; the tertiary function classification labels corresponding to image search can be same-image search and similar-image search.
  • the secondary function classification labels corresponding to human-computer interaction are natural language processing, gesture interaction, and posture interaction, etc.; the tertiary function classification labels corresponding to natural language processing can be art question answering and knowledge graph, etc.; the tertiary function classification labels corresponding to gesture interaction can be static gestures and dynamic gestures, etc.; the tertiary function classification label of posture interaction can be posture estimation, etc.
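The three-level hierarchy described above can be modeled as nested mappings. The label names follow the examples in the text; the data structure and helper function are illustrative assumptions, not part of the claims.

```python
# primary label -> {secondary label: [tertiary labels]}
FUNCTION_LABELS = {
    "computational vision": {
        "picture recognition": ["painting", "fruit", "food", "car", "plant", "animal"],
        "OCR": ["business card recognition", "bill recognition", "barcode recognition"],
        "face recognition": ["facial expression", "face attribute", "acquaintance"],
    },
    "image intelligence": {
        "image enhancement": ["HDR processing", "super-resolution"],
        "new image applications": ["artistic QR code"],
        "image processing": ["segmentation migration", "magic animation"],
        "image search": ["same-image search", "similar-image search"],
    },
    "human-computer interaction": {
        "natural language processing": ["art question answering", "knowledge graph"],
        "gesture interaction": ["static gestures", "dynamic gestures"],
        "posture interaction": ["posture estimation"],
    },
}

def tertiary_labels(primary, secondary):
    """Return the tertiary labels under a given primary/secondary pair."""
    return FUNCTION_LABELS.get(primary, {}).get(secondary, [])
```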
  • the first interface 10 further includes a third area A3, and the third area A3, the first area A1, and the second area A2 are arranged in order from top to bottom along the first interface 10.
  • the third area A3 is used to display still pictures or dynamic pictures.
  • the content of the picture displayed in the third area is not limited here, and can be set according to requirements.
  • the content of the picture is a picture that can display information such as corporate advertisements, news, and achievements to users.
  • the third area A3 further includes an operation label S1 for the user to input information.
  • the name of the operation tag can be named according to actual needs.
  • the name of the operation tag is "cooperation consultation", which is convenient for users who are interested in seeking cooperation to leave contact information and cooperation details.
  • the operation label S1 can be located at any position in the third area A3, which is not limited here.
  • the processor is also configured to execute computer instructions to make the electronic device perform: in response to a start instruction selecting the operation label for the user to input information, displaying an input information box on a new display interface so that the user can enter information.
  • the first interface 10 further includes an operation icon S2 for the user to input information, and the operation icon S2 is located at a fixed position on the display screen.
  • the operation icon can be designed as various icons, such as iconic icons representing enterprises, etc., which are not limited here.
  • the processor is also configured to execute computer instructions to make the electronic device perform: in response to a start instruction selecting the operation icon for the user to input information, displaying an input information box on a new display interface so that the user can enter information.
  • when the display screen displays the first interface, in response to a directional movement trigger instruction in the second area along the first direction, the corresponding primary function classification label is selected.
  • the first direction may be the horizontal direction of the display screen; that is, when the display screen displays the first interface and the second area slides along the first direction X, the primary function classification label may be switched.
  • for example, the selected classification label changes from 1_1 to the primary function classification label 1_2, and the second area A2 simultaneously displays the multiple secondary function classification labels 1_21, 1_22, and 1_23 included in the selected primary function classification label 1_2 and at least one tertiary function classification label 1_2mk included in each secondary function classification label 1_2m.
  • 1_21 includes 1_211 and 1_212
  • 1_22 includes 1_221 and 1_222
  • 1_23 includes 1_231 and 1_232.
  • in response to a directional movement trigger instruction along the second direction, the entire first interface moves relative to the display screen in the second direction; when the first area moves to the top of the first interface, the position of the first area remains fixed and the second area moves in the second direction.
  • the second direction may be the vertical direction of the display screen.
  • the first interface includes the first area and the second area, as shown in FIG. 6a
  • the first area A1 and the second area A2 both move in the second direction Y relative to the display screen.
  • the first area A1 moves to the top of the first interface
  • the position of the first area A1 remains fixed
  • the second area A2 moves in the second direction.
  • alternatively, the first area A1 and the second area A2 may both continue to move in the second direction Y relative to the display screen, which is not limited here.
  • the first interface includes the first area, the second area, and the third area, as shown in FIG. 6b
  • the first area A1, the second area A2, and the third area A3 are relative to the display screen. All move in the second direction Y.
  • the first area A1 moves to the top of the first interface
  • the first area A1 remains fixed in position
  • the second area A2 moves in the second direction. In this way, it is convenient for the user to determine the first-level functional classification label to which the content of the currently displayed second area belongs.
  • alternatively, the first area A1, the second area A2, and the third area A3 may all continue to move in the second direction Y relative to the display screen, which is not limited here.
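The scroll behavior above (the whole interface moves until the first area A1 reaches the top of the screen, after which A1 stays pinned while the second area A2 keeps moving) can be sketched as a small position-update rule. Coordinates and function names are assumptions made for illustration.

```python
def scroll_positions(a1_top, a2_top, dy):
    """Move areas A1 and A2 up by dy pixels.

    a1_top / a2_top: current top coordinates (0 is the top of the screen).
    A1 is clamped at 0, modeling the pinned first area; A2 always
    scrolls the full distance.
    """
    new_a1 = max(a1_top - dy, 0.0)  # A1 stops once it reaches the top
    new_a2 = a2_top - dy            # A2 keeps moving in the second direction
    return new_a1, new_a2
```

A swipe larger than A1's remaining distance to the top leaves A1 pinned at 0 while A2 continues past it.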
  • when the display screen displays the second interface corresponding to the selected tertiary function classification label, as shown in FIG. 7, the second interface 20 includes the first display area B1, the second display area B2, the third display area B3, and the fourth display area B4;
  • the first display area B1 includes a tertiary-function experience object image or a tertiary-function experience object effect image, such as picture 1 in FIG. 7;
  • the second display area B2 includes multiple example images distributed along the third direction X', such as example image 1, example image 2, and example image 3 in FIG. 7; each of the multiple example images is smaller than the image in the first display area;
  • the first display area B1 displays the effect image and the complete image corresponding to the selected example image, and the effect image corresponds to the function of the selected tertiary function classification label;
  • in response to a directional movement trigger instruction along the third direction X', the multiple example images move simultaneously along the third direction X' in the second display area B2;
  • the third display area B3 includes a first operation label S3 for uploading pictures
  • the fourth display area B4 includes an application scenario label S4 associated with the selected three-level function classification label.
  • the example image can be configured by the background server, so that the user can directly experience the effect of the algorithm without uploading the image.
  • the operation label S3 for uploading pictures in the third display area B3 facilitates the user to select a local picture or take a photo for upload.
  • the first display area B1, the second display area B2, the third display area B3, and the fourth display area B4 are arranged along the fourth direction Y' on the second interface 20.
  • the third direction is the horizontal direction of the display screen
  • the fourth direction is the vertical direction of the display screen
  • each label is not specifically limited, and in practical applications, the label can be named according to the function that the label needs to implement.
  • the second interface shown in FIG. 7 is described through specific embodiments.
  • the second interface 20 displayed on the display screen is as shown in FIG. 8, where 3 example images are displayed in area B2 and the selected example image is the first one from left to right; the first example image displayed in area B2 is only a partial image, while area B1 displays the complete image of the first example image after HDR processing.
  • the second interface 20 displayed on the display screen is shown in Figure 9.
  • the second display area B2 displays several style example images, and the user can swipe left and right to view more styles.
  • when the display screen displays the second interface 20, the second display area B2 also includes an operation label for comparison switching; for example, by selecting the comparison button shown in FIG. 8, the picture in the first display area B1 can be switched between the experience object effect picture and the original picture, so that the processing effect can be compared intuitively.
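The example-selection and comparison-switch behavior above can be sketched as follows. The class, the `effect` callable standing in for the selected tertiary function's processing, and the image placeholders are all illustrative assumptions.

```python
class SecondInterface:
    """Hypothetical model of areas B1/B2 and the comparison control."""

    def __init__(self, examples, effect):
        self.examples = examples  # example images shown in area B2
        self.effect = effect      # processing of the selected tertiary function
        self.original = None      # complete original of the selected example
        self.b1 = None            # image currently shown in area B1

    def select_example(self, index):
        """Selecting a thumbnail shows its processed effect image in B1."""
        self.original = self.examples[index]
        self.b1 = self.effect(self.original)

    def toggle_compare(self):
        """Comparison label: swap B1 between effect image and original."""
        if self.b1 == self.original:
            self.b1 = self.effect(self.original)
        else:
            self.b1 = self.original
```

With `str.upper` standing in for an image-processing effect, selecting an example shows the processed version, and each toggle flips B1 between the processed and original versions.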
  • when the display screen displays the second interface corresponding to the selected tertiary function classification label, as shown in FIG. 10, the second interface further includes a fifth display area B5;
  • the fifth display area B5 displays the attribute information of the complete image.
  • the first display area B1, the second display area B2, the fifth display area B5, the third display area B3, and the fourth display area B4 are arranged on the second interface 20 along the fourth direction Y'. Further, the fifth display area B5 may be set below the first display area B1 and the second display area B2 to facilitate comparing the complete image with its attribute information.
  • the second interface shown in FIG. 10 is described through specific embodiments.
  • for example, when the selected three-level function classification label is painting recognition, the attribute information may include category, subject, content, etc., which is not limited here; the second interface 20 is shown, for example, in Figure 11.
  • further, a confidence level can be appended after each recognition result, which is not limited here.
  • when the selected three-level function classification label is business card recognition, the attribute information may include the various text information on the business card, and the second interface 20 displayed on the display screen is as shown in FIG. 12.
  • when the selected three-level function classification label is barcode OCR, the attribute information may include the barcode recognition result and the text recognition result, and the second interface 20 displayed on the display screen is as shown in FIG. 13.
  • alternatively, as shown in FIG. 14, the first display area B1 further includes the recognition result of the experience object picture, and the recognition result is superimposed on the three-level function experience object picture.
  • for example, when the selected three-level function classification label is fruit recognition, the recognition result may include the fruit name and the confidence level.
  • the second interface 20 displayed on the display screen is then, for example, as shown in FIG. 14.
  • the third display area further includes a second operation label; in response to the selection of the second operation label, the attribute information is saved in the attribute information data list.
  • for example, as shown in FIG. 12, when the selected three-level function classification label is business card recognition, the third display area B3 also includes an operation label for saving attribute information to the address book, making it convenient for users to save the attribute information in the phone's address book.
  • the attribute information includes the address, mobile phone number, etc.; the user can select which attribute information to save as needed, for example selecting the mobile phone number and saving it to the address book.
  • when the display screen displays the second interface corresponding to the selected three-level function classification label, the second interface includes a first display area, a second display area, and a third display area;
  • in response to input information in the second display area, the first display area of the second interface of the display screen generates a first conversion image corresponding to the input information;
  • the third display area includes a third operation label and a fourth operation label.
  • the third operation label is used to convert the first conversion image into a second conversion image
  • the fourth operation label is used to convert the first conversion image into a third conversion image.
  • the second conversion image and the third conversion image are different.
  • specifically, as shown in FIG. 15, the third operation label generates a first type of two-dimensional code S5, and the fourth operation label generates a second type of two-dimensional code S6.
  • for example, the first type of two-dimensional code is a QR code with a beautified background (the QR code in area C1 of FIG. 16), and the second type is a QR code with a beautified structure (the QR code in area C1 of FIG. 17).
  • the second conversion image is obtained by fusing a background image (the picture of two horses in FIG. 16) with the QR code of FIG. 15; during this fusion, the background image is not consulted when processing the QR code.
  • the QR code image in FIG. 17 (corresponding to the third conversion image), by contrast, is obtained by consulting the background image when processing the QR code of FIG. 15 during the fusion, so the distribution of black and white dots in the resulting QR code is correlated with the distribution of light and dark in the background image.
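The correlation just described, black and white dots following the background's light and dark distribution, can be sketched as a simple blending rule. This is an illustrative simplification under assumed inputs, not the patent's actual algorithm; the function name `beautify_structure` is hypothetical, and a real artistic QR generator would also handle module size, finder patterns, and scannability constraints.

```python
def beautify_structure(qr, bg):
    """Blend a binary QR matrix with a grayscale background (illustrative).

    qr: 2D list of 0/1 module bits (1 = dark module).
    bg: 2D list of background luminance values in [0, 255], same shape.

    Returns a 2D list of display intensities in [0, 255]. Dark modules stay
    in the dark half of the range and light modules in the bright half, but
    within each half the exact intensity follows the local background
    luminance, so the dot pattern is correlated with the light and dark
    areas of the background, as described for the structure-beautified code.
    """
    out = []
    for qr_row, bg_row in zip(qr, bg):
        out.append([
            lum // 2 if bit else 128 + lum // 2  # dark: 0..127, light: 128..255
            for bit, lum in zip(qr_row, bg_row)
        ])
    return out
```

For a dark module over a bright background region this yields a lighter shade of "dark" than over a dark region, which is the relationship the description attributes to the code of Figure 17.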
  • take as an example a selected three-level function classification label that realizes the function of encoding a paragraph of text into a two-dimensional code.
  • as shown in FIG. 15, the second interface 20 includes a first display area B1, a second display area B2, and a third display area B3;
  • the first display area B1 includes a two-dimensional code generated according to the text
  • the second display area B2 includes a text editing area for generating a QR code of the first display area B1;
  • the third display area B3 includes an operation label S5 for generating a first type of two-dimensional code and an operation label S6 for generating a second type of two-dimensional code, wherein the first type of two-dimensional code and the second type of two-dimensional code are different.
  • the second interface 20 also has a fourth display area B4, where the fourth display area B4 includes an application scenario label S4 associated with the selected three-level function classification label.
  • with the text editing area in the second display area B2 for generating the QR code, it is convenient for users to input text; the background server can also configure a default text so that users can experience the function effect directly without any input.
  • the first display area B1 further includes an operation label S7 for saving the two-dimensional code, making it convenient for users to save the generated QR code locally on the mobile phone.
  • the first type of two-dimensional code and the second type of two-dimensional code being different means that the two-dimensional codes differ in appearance, while the information they contain can be the same.
  • the first type of two-dimensional code is a two-dimensional code that can beautify the background
  • the second type of two-dimensional code is a two-dimensional code that can beautify the structure, which is not limited here.
  • in response to a trigger instruction selecting either the operation label for generating the first type of two-dimensional code or the operation label for generating the second type, the electronic device displays a third interface on the display screen; the third interface includes a first functional area, a second functional area, and a third functional area distributed along the third direction;
  • the first functional area includes a QR code with a background image
  • the second functional area includes a text editing area for modifying the QR code
  • the third functional area includes operation labels for changing the background image.
  • the first type of two-dimensional code is a two-dimensional code that can beautify the background
  • in response to selecting the operation label for generating the first type of two-dimensional code, as shown in FIG. 16, on the third interface 30:
  • the first functional area C1 includes a two-dimensional code with a background image
  • the second functional area C2 includes a text editing area for modifying the QR code
  • the third functional area C3 includes operation labels for changing the background picture.
  • the second type of two-dimensional code is a two-dimensional code that can beautify the structure
  • in response to selecting the operation label for generating the second type of two-dimensional code, as shown in FIG. 17, on the third interface 30 the two-dimensional code with a background image in the first functional area C1 has its black and white elements adjusted according to the light and dark areas of the background image, improving the aesthetic effect of the artistic QR code; this is the two-dimensional code that beautifies the structure.
  • the first functional area C1 further includes an operation label S8 for storing a two-dimensional code.
  • the processor is further configured to execute computer instructions to make the electronic device perform:
  • in response to a trigger instruction selecting an application scenario label, the display screen shows the link interface corresponding to the selected label, containing an introduction to the associated three-level function classification label, an operation label for the user to leave contact information, and an operation label for user feedback.
  • taking painting recognition as an example, the link interface corresponding to the application scenario label is shown in Figure 18; it introduces the application scenarios of the three-level function classification label, including a brief introduction and a detailed description.
  • the "Contact Us" and "Feedback" buttons at the bottom of the page open the corresponding display interfaces when clicked.
  • Figure 19 is a display interface opened by clicking the "Contact Us” button.
  • This interface provides users with a window for business contact with the company. The user fills in the company name, name, contact number, email address, a specific description, and other information, then clicks the submit button to send the information to the backend and waits for the company to get in touch. After clicking the submit button, the user gets a "submission successful" prompt and is returned to the first interface.
  • Figure 20 shows the display interface opened by clicking the "Feedback” button.
  • This interface provides the user with a window to provide feedback on the algorithm.
  • the user fills in specific comments and clicks the submit button to send the information to the backend.
  • After clicking the submit button, the user gets the prompt "Submission successful, thank you for your valuable comments" and is returned to the first interface.
  • the present disclosure is only schematically illustrated by the above-mentioned embodiments, and is not specifically limited thereto.
  • the above electronic devices provided by the embodiments of the present disclosure allow users to intuitively access various intelligent experiences: carousel pictures on the first interface promote corporate information to users; the function classification labels at each level present the company's artificial intelligence functions; users can experience the various algorithm functions and effects using default pictures or locally loaded pictures, download part of the processing results, learn the application scenarios of each function, submit their own feedback on each function, and make business contact with the enterprise to seek cooperation.
  • embodiments of the present disclosure also provide an interface display method, which is applied to an electronic device with a display screen.
  • the interface display method includes:
  • the display screen displays a first interface; the first interface includes a first area and a second area, the first area includes at least one first-level function classification label, and the second area includes a plurality of second-level function classification labels and at least one three-level function classification label included in each second-level function classification label;
  • in response to a trigger instruction selecting one of the at least one first-level function classification label, the display screen displays in the second area the plurality of second-level function classification labels included in the selected first-level function classification label and the at least one three-level function classification label included in each second-level function classification label;
  • in response to a trigger instruction selecting one of the at least one three-level function classification label, the display screen displays a second interface; the content included in the second interface is different from the content included in the first interface.
  • embodiments of the present disclosure also provide a user interface system for picture recognition, which includes a plurality of first function controls, at least one second function control, at least one third function control, and a picture display control displayed in the user interface; each of the plurality of first function controls presents in the user interface a first picture corresponding to that control, and the presentation state in the user interface of the object in each corresponding first picture is different;
  • the second function control can be operated after selection so that the user can select and upload the second picture
  • one of the multiple first function controls can be operated after selection, or after the user selects and uploads the second picture, so that the picture in the picture display control is updated to the first picture or the second picture, and the attribute information of the updated picture is presented on the user interface;
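The control behavior above can be sketched as a minimal state model. This is an illustrative sketch, not the disclosed implementation; the class name, the `recognizer` callback, and the use of strings for pictures are all assumptions made for the example.

```python
class PictureRecognitionUI:
    """Minimal model of the described user interface system: first function
    controls carry preset sample pictures, the second function control
    accepts an uploaded picture, and either path updates the picture
    display control and the presented attribute information."""

    def __init__(self, sample_pictures, recognizer):
        self.sample_pictures = sample_pictures  # control id -> first picture
        self.recognizer = recognizer            # picture -> attribute info dict
        self.displayed_picture = None           # state of the picture display control
        self.attribute_info = None              # attribute info shown on the UI

    def select_first_control(self, control_id):
        # Selecting a first function control shows its sample picture
        # and presents that picture's recognized attribute information.
        self._update(self.sample_pictures[control_id])

    def upload_second_picture(self, picture):
        # The second function control lets the user upload a custom picture,
        # which replaces the displayed picture and its attributes.
        self._update(picture)

    def _update(self, picture):
        self.displayed_picture = picture
        self.attribute_info = self.recognizer(picture)
```

Either entry point funnels through the same update, matching the description that selecting a sample or uploading a custom picture both refresh the display control and the presented attributes.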
  • FIG. 22 illustrates the user interface system provided by the embodiment of the present disclosure.
  • the user interface can realize the function of recognizing the information in a business card uploaded by the user. It specifically includes three first function controls 101, 102, and 103, one second function control 201, one third function control 301, and a picture display control 401 displayed in the user interface.
  • each of the three first function controls 101-103 presents in the user interface the first picture corresponding to that control (a photo of a business card in Figure 22), and the presentation state in the user interface of the object in each corresponding first picture (the business card in Figure 22) is different. Specifically, in Figure 22 the business card in the picture corresponding to 101 was photographed from an angle close to its lower edge, so the business card in the picture is trapezoidal (a distorted state of the business card in the picture).
  • the business card in the picture corresponding to 102 is at an acute angle to the horizontal (a tilted state of the business card in the picture); the business card in the picture corresponding to 103 was photographed squarely from directly above, so its long and short sides are parallel to the long and short sides of the picture; for all three placements, the information on the business card can be recognized and displayed in the recognition result column.
  • the second function control 201 (the upload picture control in FIG. 22) can be operated after being selected by the user to let the user select and upload a user-defined second picture. The user may select it by touching and clicking the control, or by mouse, gesture, voice command, or other methods; the custom picture uploaded by the user can be a picture stored on the terminal device, a picture taken by the user in real time, or a picture downloaded from the Internet or a cloud server.
  • one of the multiple first function controls can be operated after selection, or after the user selects and uploads the second picture, so that the picture in the picture display control is updated to the first picture or the second picture, and the attribute information of the updated picture is presented on the user interface. For example, in Figure 22 the selected first function control is 101, and the picture corresponding to 101 is displayed in the picture display control 401; when 101 is selected, the recognition result also shows the relevant information in the corresponding business card, including name, position, company, address, email address, and mobile phone number. If the user uploads a custom picture, the picture display control 401 displays that custom picture instead.
  • the recognition result is then the result of recognizing the relevant information in the user-defined picture.
  • the relevant information here can include text information, letter information, number information, etc. contained in the custom picture.
  • after the attribute information of the updated picture (for example, the relevant information of the business card corresponding to 101 in FIG. 22) is presented, the third function control 301 can be operated so that this attribute information is stored in the first data list. For example, the user can select 301 by clicking to save the relevant information from the business card in the corresponding database.
  • in Figure 22, the function of control 301 is to save the phone number to the user's address book, so the first data list here is the user's address book list. Of course, the user can also set the function of control 301 according to requirements.
  • embodiments of the present disclosure also provide an interactive method for image recognition through a user interface, as shown in FIG. 23, including:
  • S201 Provide a plurality of first function controls, at least one second function control, at least one third function control, and a picture display control in the user interface;
  • Each of the plurality of first functional controls presents a first picture corresponding to each first functional control, and the presentation state of an object in the corresponding first picture in each first functional control is different;
  • the attribute information of the updated picture includes at least one of text information, letter information, and numeric information.
  • the attribute information of the picture may include name, position, company, address, and email address.
  • storing the attribute information of the updated picture in the first data list includes:
  • when the attribute information includes multiple items, after the selected attribute information is received and a selection instruction for the third function control is received, the selected attribute information is stored in the first data list.
  • each of the name, position, company, address, and email address in the attribute information of the picture can be individually selected and stored in the first data list.
  • the presentation state includes at least one of the tilt state of the object and the distortion state of the object.
  • for example, in Figure 22 the presentation state of the object in each first picture is different: the business card in the picture corresponding to 101 is in a distorted state, and the business card in the picture corresponding to 102 is in a tilted state.
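The selective-save behavior, storing only the attribute items the user picked (e.g. just the mobile phone number) into the first data list such as the address book, can be sketched as follows. The function name and dict-based representation are assumptions for illustration, not part of the disclosure.

```python
def save_selected_attributes(attribute_info, selected_keys, data_list):
    """Store only the user-selected items of the recognized attribute
    information into the first data list (e.g. an address book list).

    attribute_info: dict of all recognized items (name, position, ...).
    selected_keys: the items the user chose to save.
    data_list: list acting as the first data list; appended in place.
    """
    entry = {key: attribute_info[key] for key in selected_keys if key in attribute_info}
    data_list.append(entry)
    return entry
```

Saving, say, only the mobile phone number leaves the other recognized fields unpersisted, matching the description of selecting the attribute information to be saved as needed.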
  • the electronic devices, computer storage media, computer program products, or chips provided in the embodiments of the present application are all used to execute the corresponding methods provided above; therefore, for the beneficial effects they can achieve, refer to the beneficial effects of the corresponding methods provided above, which are not repeated here.
  • the disclosed device and method can be implemented in other ways.
  • the device embodiments described above are only illustrative; for example, the division into modules or units is only a division by logical function, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another device, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or multiple physical units, that is, they may be located in one place, or they may be distributed to multiple different places. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be realized in the form of hardware or software function unit.
  • when the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a readable storage medium.
  • based on this understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions to make a device (which may be a single-chip microcomputer, a chip, etc.) or a processor execute all or part of the steps of the methods of the various embodiments of the present application.
  • the aforementioned storage media include: USB flash drives, removable hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, optical disks, and other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Library & Information Science (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A user interface system for picture recognition, an electronic device, and an interaction method. The electronic device includes a display screen, a memory, and a processor. A first interface is displayed on the display screen; the first interface includes at least one first-level function classification label, a plurality of second-level function classification labels, and at least one three-level function classification label included in each second-level function classification label. By selecting a three-level function classification label, a second interface can be displayed so as to experience, in the second interface, the function effect corresponding to that label. With this electronic device, the user can select different three-level function classification labels to experience directly from the first interface, which offers the user practicality and has a certain tool-like character.

Description

User interface system for picture recognition, electronic device, and interaction method
Technical Field
The present disclosure relates to the field of terminal technology, and in particular to a user interface system for picture recognition, an electronic device, and an interaction method.
Background Art
With the progress of terminal technology, the functions of electronic devices such as mobile phones have become increasingly rich. To meet users' different experience needs, a variety of applications need to be installed on the phone, for example picture beautification, image recognition, face recognition, etc., and the user opens the corresponding application for each use. But when the user wants to experience many functions, the user has to download and install many applications.
Summary of the Invention
The embodiments of the present disclosure provide an interaction method for picture recognition through a user interface, including:
providing, in the user interface, a plurality of first function controls, at least one second function control, at least one third function control, and a picture display control;
each of the plurality of first function controls presents a first picture corresponding to that first function control, and the presentation state of the object in the corresponding first picture of each first function control is different;
in response to a selection instruction for the second function control, receiving a second picture selected and uploaded by the user;
after receiving a selection indication for one of the plurality of first function controls, or an instruction of the user selecting and uploading the second picture, the picture in the picture display control is updated to the first picture or the second picture, and the attribute information of the updated picture is presented on the user interface;
after the attribute information of the updated picture is presented and a selection indication for the third function control is received, the attribute information of the updated picture is stored in a first data list.
Optionally, in the interaction method provided by the embodiments of the present disclosure, the attribute information of the updated picture includes at least one of text information, letter information, and number information.
Optionally, in the interaction method provided by the embodiments of the present disclosure, storing the attribute information of the updated picture in the first data list after the attribute information is presented and a selection indication for the third function control is received includes:
when the attribute information includes multiple items, after the selected attribute information is received and a selection indication for the third function control is received, the selected attribute information is stored in the first data list.
Optionally, in the interaction method provided by the embodiments of the present disclosure, the presentation state includes at least one of a tilted state of the object and a distorted state of the object.
Correspondingly, the embodiments of the present disclosure further provide a user interface system for picture recognition, including a plurality of first function controls, at least one second function control, at least one third function control, and a picture display control presented in the user interface; each of the plurality of first function controls presents in the user interface a first picture corresponding to that first function control, and the presentation state in the user interface of the object in the corresponding first picture of each first function control is different;
the second function control is operable after selection so that the user can select and upload a second picture;
one of the plurality of first function controls is operable after selection, or after the user's operation of selecting and uploading the second picture, so that the picture in the picture display control is updated to the first picture or the second picture, and the attribute information of the updated picture is presented on the user interface;
after the attribute information of the updated picture is presented and the third function control is selected, the system is operable so that the attribute information of the updated picture is stored in a first data list.
Correspondingly, the embodiments of the present disclosure further provide an electronic device, including a display screen, a memory, and a processor, wherein:
the memory, connected with the display screen and the processor, is configured to store computer instructions and save data associated with the display screen;
the processor, connected with the display screen and the memory, is configured to execute the computer instructions to make the electronic device perform:
displaying a first interface on the display screen, the first interface including a first area and a second area, the first area including at least one first-level function classification label, and the second area including a plurality of second-level function classification labels and at least one three-level function classification label included in each second-level function classification label;
in response to a trigger instruction selecting one of the at least one first-level function classification label, displaying on the display screen, in the second area, the plurality of second-level function classification labels included in the selected first-level function classification label and the at least one three-level function classification label included in each second-level function classification label;
in response to a trigger instruction selecting one of the at least one three-level function classification label, displaying a second interface on the display screen; the content included in the second interface is different from the content included in the first interface.
Optionally, in the electronic device provided by the embodiments of the present disclosure, the first interface further includes a third area, and the third area, the first area, and the second area are arranged in order from the top to the bottom of the first interface.
Optionally, in the electronic device provided by the embodiments of the present disclosure, when the display screen displays the first interface, in response to a directional movement trigger instruction along a first direction in the second area, the corresponding first-level function classification label is selected.
Optionally, in the electronic device provided by the embodiments of the present disclosure, when the display screen displays the first interface, in response to a directional movement trigger instruction along a second direction, the entire first interface moves along the second direction relative to the display screen, and when the first area of the first interface moves to the top of the first interface, the first area stays fixed in position while the second area moves along the second direction.
Optionally, in the electronic device provided by the embodiments of the present disclosure, when the display screen displays the second interface corresponding to the selected three-level function classification label, the second interface includes a first display area, a second display area, a third display area, and a fourth display area;
the first display area includes a three-level function experience object picture or a three-level function experience object effect picture; the second display area includes a plurality of sample pictures distributed along a third direction, each of which is smaller in size than the picture in the first display area;
in response to a selection instruction for one of the plurality of sample pictures, the first display area displays the effect picture and the complete picture corresponding to the selected sample picture, the effect picture corresponding to the function of the selected three-level function classification label;
in response to a directional movement trigger instruction along the third direction, the plurality of sample pictures move together along the third direction in the second display area;
the third display area includes a first operation label for uploading pictures;
the fourth display area includes an application scenario label associated with the selected three-level function classification label.
Optionally, in the electronic device provided by the embodiments of the present disclosure, the first display area, the second display area, the third display area, and the fourth display area are arranged on the second interface along a fourth direction; the third direction is the horizontal direction on the display screen, and the fourth direction is the vertical direction on the display screen.
Optionally, in the electronic device provided by the embodiments of the present disclosure, when the display screen displays the second interface corresponding to the selected three-level function classification label, the second interface further includes a fifth display area;
the fifth display area displays the attribute information of the complete picture.
Optionally, in the electronic device provided by the embodiments of the present disclosure, the third display area further includes a second operation label; in response to selection of the second operation label, the attribute information is saved in an attribute information data list.
Optionally, in the electronic device provided by the embodiments of the present disclosure, when the display screen displays the second interface corresponding to the selected three-level function classification label, the second interface includes a first display area, a second display area, and a third display area;
in response to input information in the second display area, the first display area of the second interface of the display screen generates a first conversion image corresponding to the input information;
the third display area includes a third operation label and a fourth operation label; the third operation label is used to convert the first conversion image into a second conversion image, the fourth operation label is used to convert the first conversion image into a third conversion image, and the second conversion image and the third conversion image are different.
Correspondingly, the embodiments of the present disclosure further provide an interaction method for an electronic device, wherein the electronic device includes a display screen and the method includes:
controlling the display screen to display a first interface, the first interface including a first area and a second area, the first area including at least one first-level function classification label, and the second area including a plurality of second-level function classification labels and at least one three-level function classification label included in each second-level function classification label;
in response to a trigger instruction selecting one of the at least one first-level function classification label, controlling the display screen to display in the second area the plurality of second-level function classification labels included in the selected first-level function classification label and the at least one three-level function classification label included in each second-level function classification label;
in response to a trigger instruction selecting one of the at least one three-level function classification label, controlling the display screen to display a second interface; the content included in the second interface is different from the content included in the first interface.
Brief Description of the Drawings
Figure 1 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure;
Figure 2 is the first schematic diagram of an interface displayed by the electronic device provided by an embodiment of the present disclosure;
Figure 3 is the second schematic interface diagram;
Figure 4 is the third schematic interface diagram;
Figure 5 is the fourth schematic interface diagram;
Figure 6a is the first schematic diagram of interface changes of the electronic device provided by an embodiment of the present disclosure during display;
Figure 6b is the second schematic diagram of interface changes;
Figure 7 is the fifth schematic interface diagram;
Figure 8 is the sixth schematic interface diagram;
Figure 9 is the seventh schematic interface diagram;
Figure 10 is the eighth schematic interface diagram;
Figure 11 is the ninth schematic interface diagram;
Figure 12 is the tenth schematic interface diagram;
Figure 13 is the eleventh schematic interface diagram;
Figure 14 is the twelfth schematic interface diagram;
Figure 15 is the thirteenth schematic interface diagram;
Figure 16 is the fourteenth schematic interface diagram;
Figure 17 is the fifteenth schematic interface diagram;
Figure 18 is the sixteenth schematic interface diagram;
Figure 19 is the seventeenth schematic interface diagram;
Figure 20 is the eighteenth schematic interface diagram;
Figure 21 is a schematic flowchart of the interface display method provided by an embodiment of the present disclosure;
Figure 22 is a schematic diagram of a user interface system for picture recognition provided by an embodiment of the present disclosure;
Figure 23 is a flowchart of an interaction method for picture recognition through a user interface provided by an embodiment of the present disclosure.
Detailed Description
To make the above objects, features, and advantages of the present disclosure more apparent and understandable, the present disclosure is further described below with reference to the accompanying drawings and embodiments. However, the example embodiments can be implemented in many forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the present disclosure is more comprehensive and complete and fully conveys the concept of the example embodiments to those skilled in the art. The same reference numerals in the figures denote the same or similar structures, so repeated descriptions of them are omitted. The words expressing position and direction in the present disclosure are explained taking the figures as examples, but changes can be made as needed, and all such changes are included in the protection scope of the present disclosure. The figures of the present disclosure are only used to illustrate relative positional relationships and do not represent true scale.
It should be noted that specific details are set forth in the following description to facilitate a full understanding of the present disclosure. However, the present disclosure can be implemented in many other ways different from those described here, and those skilled in the art can make similar generalizations without violating the connotation of the present disclosure; therefore, the present disclosure is not limited by the specific embodiments disclosed below. The subsequent description of the specification presents preferred embodiments for implementing the present application, but the description is intended to illustrate the general principles of the present application and not to limit its scope, which is defined by the appended claims.
The user interface system for picture recognition, the electronic device, and the interaction method provided by the embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
An electronic device provided by an embodiment of the present disclosure, as shown in Figure 1, includes a display screen 1, a memory 2, and a processor 3, wherein:
the memory 2, connected with the display screen 1 and the processor 3, is configured to store computer instructions and save data associated with the display screen 1;
the processor 3, connected with the display screen 1 and the memory 2, is configured to execute the computer instructions to make the electronic device perform:
displaying a first interface 10 on the display screen, as shown in Figure 2; the first interface 10 includes a first area A1 and a second area A2, the first area A1 includes at least one first-level function classification label 1_n (n is an integer greater than or equal to 1), and the second area A2 includes a plurality of second-level function classification labels 1_nm (m is an integer greater than or equal to 1) and at least one three-level function classification label 1_nmk (k is an integer greater than or equal to 1) included in each second-level function classification label 1_nm;
in response to a trigger instruction selecting one of the at least one first-level function classification label 1_n, the display screen 1 displays in the second area A2 the plurality of second-level function classification labels 1_nm included in the selected first-level function classification label 1_n and the at least one three-level function classification label 1_nmk included in each second-level function classification label 1_nm (in Figure 2, 1_1 is the selected first-level function classification label);
in response to a trigger instruction selecting one of the at least one three-level function classification label 1_nmk, the display screen 1 displays a second interface 20, as shown in Figure 7; the content included in the second interface 20 is different from the content included in the first interface 10.
The electronic device provided by the embodiments of the present disclosure includes a display screen, a memory, and a processor, wherein a first interface is displayed on the display screen; the first interface includes at least one first-level function classification label, a plurality of second-level function classification labels, and at least one three-level function classification label included in each second-level function classification label. By selecting a three-level function classification label, a second interface can be displayed so as to experience, in the second interface, the function effect corresponding to that label. With this electronic device, the user can select different three-level function classification labels to experience directly from the first interface, which offers the user practicality and has a certain tool-like character.
In specific implementation, in the electronic device provided by the embodiments of the present disclosure, a first-level function classification label is a generalization of multiple second-level function classification labels with common features, a second-level function classification label is a generalization of multiple three-level function classification labels with common features, and the three-level function classification labels correspond to different user experience functions. In specific implementation, the names of the function classification labels at each level can be chosen according to the experience functions they realize, which is not limited here. For example, the first-level function classification labels may be computer vision, image intelligence, and human-computer interaction. The second-level labels under computer vision may be painting recognition, OCR (Optical Character Recognition), and face recognition; the three-level labels under picture recognition may be painting recognition, fruit recognition, food recognition, car recognition, plant recognition, animal recognition, etc.; the three-level labels under OCR may be business card recognition, bill recognition, barcode recognition, etc.; the three-level labels under face recognition may be expression recognition, face attribute recognition, similarity recognition, etc. The second-level labels under image intelligence may be image enhancement, new image applications, image processing, and search by image; the three-level labels under image enhancement may be HDR (High-Dynamic Range) processing, image super-resolution processing, etc.; the three-level labels under new image applications may be artistic QR codes, etc.; the three-level labels under image processing may be image segmentation transfer, magic animated pictures, etc.; the three-level labels under search by image may be identical-picture search, similar-picture search, etc. The second-level labels under human-computer interaction may be natural language processing, gesture interaction, and posture interaction; the three-level labels under natural language processing may be art question answering, knowledge graphs, etc.; the three-level labels under gesture interaction may be static gestures, dynamic gestures, etc.; the three-level labels under posture interaction may be posture estimation, etc.
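The three-level classification described above maps naturally onto a nested mapping. The sketch below uses a few of the example labels; the structure, not the exact label set or function name, is the point, and `FUNCTION_LABELS`/`third_level_labels` are illustrative names, not part of the disclosure.

```python
# First-level labels map to second-level labels, which map to lists of
# three-level labels (each corresponding to one user experience function).
FUNCTION_LABELS = {
    "computer vision": {
        "OCR": ["business card recognition", "bill recognition", "barcode recognition"],
        "face recognition": ["expression recognition", "face attribute recognition"],
    },
    "image intelligence": {
        "image enhancement": ["HDR processing", "image super-resolution"],
        "new image applications": ["artistic QR code"],
    },
}

def third_level_labels(first_level):
    """All three-level labels under one first-level label, i.e. what the
    second area of the first interface lists when that label is selected."""
    return [label
            for third_levels in FUNCTION_LABELS[first_level].values()
            for label in third_levels]
```

Selecting a different first-level label simply re-queries the mapping, mirroring the label switch described for the second area.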
Optionally, in the electronic device provided by the embodiments of the present disclosure, as shown in Figure 3, the first interface 10 further includes a third area A3, and the third area A3, the first area A1, and the second area A2 are arranged in order from the top to the bottom of the first interface 10.
In specific implementation, the third area A3 is used to display static pictures or dynamic pictures.
In specific implementation, the content of the pictures displayed in the third area is not limited here and can be set as required; for example, the pictures may show users information such as the enterprise's advertisements, news, and achievements.
Optionally, in the electronic device provided by the embodiments of the present disclosure, as shown in Figure 4, the third area A3 further includes an operation label S1 for user input of information. In specific implementation, the name of this operation label can be chosen according to actual needs; for example, it may be named "cooperation consultation", making it convenient for users interested in seeking cooperation to leave contact information, cooperation details, and the like.
Specifically, the operation label S1 may be located anywhere in the third area A3, which is not limited here.
Further, in specific implementation, the processor is also configured to execute the computer instructions to make the electronic device perform: in response to a trigger instruction selecting the operation label for user input of information, displaying an input box on a new display interface for the user to enter information.
Alternatively, optionally, in the electronic device provided by the embodiments of the present disclosure, as shown in Figure 5, the first interface 10 further includes an operation icon S2 for user input of information, and the operation icon S2 is located at a fixed position on the display screen. When the display screen displays the first interface, the position of the operation icon S2 relative to the screen is fixed. In specific implementation, the operation icon can be designed in various ways, for example as an icon representing the enterprise's logo, which is not limited here.
Further, in specific implementation, the processor is also configured to execute the computer instructions to make the electronic device perform: in response to a trigger instruction selecting the operation icon for user input of information, displaying an input box on a new display interface for the user to enter information.
Optionally, in the electronic device provided by the embodiments of the present disclosure, when the display screen displays the first interface, in response to a directional movement trigger instruction along the first direction in the second area, the corresponding first-level function classification label is selected.
In specific implementation, the first direction may be the horizontal direction of the display screen; that is, when the display screen displays the first interface, a directional slide along the first direction X in the second area switches the first-level function classification label. For example, as shown in Figure 3, if the function classification label 1_1 is selected in the initial state and the user then slides from left to right along the first direction X in the second area A2, the selected label changes from 1_1 to the first-level function classification label 1_2, and the second area A2 simultaneously displays the second-level function classification labels 1_21, 1_22, and 1_23 included in the selected first-level label 1_2 and at least one three-level function classification label 1_2mk included in each second-level label 1_2m; specifically, 1_21 includes 1_121 and 1_122, 1_22 includes 1_221 and 1_222, and 1_23 includes 1_231 and 1_232.
Optionally, in the electronic device provided by the embodiments of the present disclosure, when the display screen displays the first interface, in response to a directional movement trigger instruction along the second direction, the entire first interface moves along the second direction relative to the display screen, and when the first area of the first interface moves to the top of the first interface, the first area stays fixed in position while the second area moves along the second direction.
In specific implementation, the second direction may be the vertical direction of the display screen. When the first interface includes the first area and the second area, as shown in Figure 6a, during a directional slide along the second direction Y, the first area A1 and the second area A2 both move along the second direction Y relative to the display screen; when the first area A1 reaches the top of the first interface, the first area A1 stays fixed in position while the second area A2 continues to move along the second direction. This makes it convenient for the user to determine which first-level function classification label the currently displayed content of the second area belongs to. Alternatively, during a directional slide along the second direction Y, the first area A1 and the second area A2 may both simply move along the second direction Y relative to the display screen, which is not limited here.
When the first interface includes the first area, the second area, and the third area, as shown in Figure 6b, during a directional slide along the second direction Y, the first area A1, the second area A2, and the third area A3 all move along the second direction Y relative to the display screen; when the first area A1 reaches the top of the first interface, the first area A1 stays fixed in position while the second area A2 continues to move along the second direction. This makes it convenient for the user to determine which first-level function classification label the currently displayed content of the second area belongs to. Alternatively, the three areas may all simply move along the second direction Y relative to the display screen, which is not limited here.
Optionally, in the electronic device provided by the embodiments of the present disclosure, when the display screen displays the second interface corresponding to the selected three-level function classification label, as shown in Figure 7, the second interface 20 includes a first display area B1, a second display area B2, a third display area B3, and a fourth display area B4;
the first display area B1 includes a three-level function experience object picture or a three-level function experience object effect picture, for example picture one in Figure 7; the second display area B2 includes a plurality of sample pictures distributed along the third direction X', for example sample pictures one, two, and three in Figure 7, each of which is smaller in size than the picture in the first display area;
in response to a selection instruction for one of the plurality of sample pictures, the first display area B1 displays the effect picture and the complete picture corresponding to the selected sample picture, the effect picture corresponding to the function of the selected three-level function classification label;
in response to a directional movement trigger instruction along the third direction X', the plurality of sample pictures move together along the third direction X' in the second display area B2;
the third display area B3 includes a first operation label S3 for uploading pictures;
the fourth display area B4 includes an application scenario label S4 associated with the selected three-level function classification label.
Specifically, in the electronic device provided by the embodiments of the present disclosure, the sample pictures can be configured by the backend server, making it convenient for users to experience the algorithm effects directly without uploading pictures.
In specific implementation, the operation label S3 for uploading pictures in the third display area B3 makes it convenient for users to select a local picture, or take a photo, to upload.
Optionally, in the electronic device provided by the embodiments of the present disclosure, as shown in Figure 7, the first display area B1, the second display area B2, the third display area B3, and the fourth display area B4 are arranged on the second interface 20 along the fourth direction Y'.
Optionally, in the electronic device provided by the embodiments of the present disclosure, the third direction is the horizontal direction of the display screen and the fourth direction is the vertical direction of the display screen.
Further, in specific embodiments, the naming of each label is not specifically limited; in practical applications, a label can be named according to the function it needs to implement.
Specifically, the second interface shown in Figure 7 is explained through specific examples. For instance, when the selected three-level function classification label is HDR, the second interface 20 displayed on the display screen is as shown in Figure 8, where area B2 displays three sample pictures and the selected one is the first from left to right; the first sample picture shown in area B2 is only a partial image, while area B1 shows the complete image of that sample, after HDR processing. When the selected three-level function classification label is image style transfer, the second interface 20 displayed on the display screen is as shown in Figure 9; the second display area B2 shows sample pictures of several styles, and the user can swipe left and right to view more styles.
In specific implementation, in the present disclosure, as shown in Figures 8 and 9, when the display screen displays the second interface 20, the second display area B2 further includes an operation label for comparison switching; for example, when the comparison button shown in Figures 7 and 8 is selected, the picture in the first display area B1 can be switched between the experience object effect picture and the original picture, so the processing effect can be compared intuitively.
Optionally, in the electronic device provided by the embodiments of the present disclosure, when the display screen displays the second interface corresponding to the selected three-level function classification label, as shown in Figure 10, the second interface further includes a fifth display area B5;
the fifth display area B5 displays the attribute information of the complete picture.
Specifically, in some embodiments of the present disclosure, as shown in Figure 10, the first display area B1, the second display area B2, the fifth display area B5, the third display area B3, and the fourth display area B4 are arranged on the second interface 20 along the fourth direction Y'. Further, the fifth display area B5 may be set below the first display area B1 and the second display area B2 to facilitate comparing the complete picture with its attribute information.
Specifically, the second interface shown in Figure 10 is explained through specific examples. For instance, when the selected three-level function classification label is painting recognition, the attribute information may include category, subject, content, etc., which is not limited here; see for example the second interface 20 shown in Figure 11. Further, a confidence level may be appended after each recognition result, which is not limited here.
When the selected three-level function classification label is business card recognition, the attribute information may include the various text information on the business card, and the second interface 20 displayed on the display screen is as shown in Figure 12.
When the selected three-level function classification label is barcode OCR, the attribute information may include the barcode recognition result and the text recognition result, and the second interface 20 displayed on the display screen is as shown in Figure 13.
Alternatively, optionally, in the electronic device provided by the embodiments of the present disclosure, as shown in Figure 14, the first display area B1 further includes the recognition result of the experience object picture, and the recognition result is superimposed on the three-level function experience object picture.
Specifically, for example, when the selected three-level function classification label is fruit recognition, the recognition result may include the fruit name and the confidence level, and the second interface 20 displayed on the display screen is, for example, as shown in Figure 14.
Optionally, in the electronic device provided by the embodiments of the present disclosure, the third display area further includes a second operation label; in response to selection of the second operation label, the attribute information is saved in an attribute information data list. For example, as shown in Figure 12, when the selected three-level function classification label is business card recognition, the third display area B3 further includes an operation label for saving attribute information to the address book, making it convenient for users to save the attribute information in the phone's address book; the attribute information includes the address, mobile phone number, etc., and the user can select which attribute information to save as needed, for example selecting the mobile phone number and saving it to the address book.
Optionally, in the electronic device provided by the embodiments of the present disclosure, when the display screen displays the second interface corresponding to the selected three-level function classification label, the second interface includes a first display area, a second display area, and a third display area;
in response to input information in the second display area, the first display area of the second interface of the display screen generates a first conversion image corresponding to the input information;
the third display area includes a third operation label and a fourth operation label; the third operation label is used to convert the first conversion image into a second conversion image, the fourth operation label is used to convert the first conversion image into a third conversion image, and the second conversion image and the third conversion image are different. Specifically, as shown in Figure 15, the third operation label generates a first type of two-dimensional code S5 and the fourth operation label generates a second type of two-dimensional code S6. For example, the first type of two-dimensional code is a QR code with a beautified background (the QR code in area C1 of Figure 16) and the second type is a QR code with a beautified structure (the QR code in area C1 of Figure 17). The second conversion image is obtained by fusing a background image (the picture of two horses in Figure 16) with the QR code of Figure 15, and during this fusion the background image is not consulted when processing the QR code; the QR code image of Figure 17 (corresponding to the third conversion image), by contrast, is obtained by consulting the background image when processing the QR code of Figure 15 during the fusion, so the distribution of black and white dots in the resulting QR code of Figure 17 is correlated with the distribution of light and dark in the background image.
Specifically, take as an example a selected three-level function classification label that realizes the function of encoding a paragraph of text into a two-dimensional code. When the display screen displays the second interface 20 corresponding to the selected label, as shown in Figure 15, the second interface 20 includes a first display area B1, a second display area B2, and a third display area B3;
the first display area B1 includes a two-dimensional code generated from the text;
the second display area B2 includes a text editing area for generating the two-dimensional code of the first display area B1;
the third display area B3 includes an operation label S5 for generating a first type of two-dimensional code and an operation label S6 for generating a second type of two-dimensional code, the first and second types being different.
Specifically, as shown in Figure 15, the second interface 20 further includes a fourth display area B4, which includes an application scenario label S4 associated with the selected three-level function classification label.
In specific implementation, the text editing area for the two-dimensional code in the second display area B2 makes it convenient for users to input text; the backend server can also configure a default text so that users can experience the function effect directly without any input.
Optionally, in the electronic device provided by the embodiments of the present disclosure, as shown in Figure 15, the first display area B1 further includes an operation label S7 for saving the two-dimensional code, making it convenient for users to save the generated QR code locally on the mobile phone.
In specific implementation, in the embodiments provided by the present disclosure, the first and second types of two-dimensional code being different means that they differ in appearance, while the information they contain can be the same; for example, the first type is a QR code with a beautified background and the second type is a QR code with a beautified structure, which is not limited here.
Optionally, in the electronic device provided by the embodiments of the present disclosure, in response to a trigger instruction selecting one of the operation tab for generating the first-type QR code and the operation tab for generating the second-type QR code, a third interface is displayed on the display screen; the third interface includes a first function area, a second function area and a third function area distributed along the third direction;
the first function area includes a QR code with a background image;
the second function area includes a text editing area for modifying the QR code;
the third function area includes an operation tab for changing the background picture.
Specifically, when the first-type QR code is a QR code with a beautified background, in the electronic device provided by the embodiments of the present disclosure, in response to selection of the operation tab for generating the first-type QR code, as shown in Fig. 16, in the third interface 30:
the first function area C1 includes a QR code with a background image;
the second function area C2 includes a text editing area for modifying the QR code;
the third function area C3 includes an operation tab for changing the background picture.
Further, when the second-type QR code is a QR code with a beautified structure, in the electronic device provided by the embodiments of the present disclosure, in response to selection of the operation tab for generating the second-type QR code, as shown in Fig. 17, in the third interface 30 the QR code with a background image in the first function area C1 is one in which the black and white elements of the QR code have been adjusted according to the light and dark regions of the background picture, to improve the aesthetic effect of the artistic QR code.
Optionally, in the electronic device provided by the embodiments of the present disclosure, as shown in Fig. 16 and Fig. 17, the first function area C1 further includes an operation tab S8 for saving the QR code.
Optionally, in the electronic device provided by the embodiments of the present disclosure, the processor is further configured to execute the computer instructions so that the electronic device performs:
in response to a trigger instruction selecting one of the application scenario tabs, displaying, via the display screen, in the link interface corresponding to the selected application scenario tab:
an introduction to the third-level function classification tab associated with the application scenario tab, an operation tab for the user to provide contact information, and an operation tab for the user to submit feedback.
Specifically, taking the third-level function classification tab of painting recognition as an example, the link interface corresponding to the application scenario tab is shown in Fig. 18. It introduces the application scenario of this third-level function classification tab, including both a brief and a detailed introduction. At the bottom of the page are "Contact Us" and "Feedback" buttons, which open the corresponding display interfaces when clicked.
Specifically, Fig. 19 shows the display interface opened by clicking the "Contact Us" button. This interface provides the user with a window for making business contact with the enterprise. The user fills in information such as company name, name, phone number, email and a detailed description, then clicks the submit button to send this information to the backend and wait for the enterprise to get in touch. After clicking the submit button, the user receives a "Submitted successfully" prompt and is redirected to the first interface.
Fig. 20 shows the display interface opened by clicking the "Feedback" button. This interface provides the user with a window for submitting feedback on the algorithms. The user fills in specific comments and clicks the submit button to send the information to the backend. After clicking the submit button, the user receives a "Submitted successfully, thank you for your valuable feedback" prompt and is redirected to the first interface.
Specifically, the present disclosure is only illustrated schematically with the above embodiments and is not limited thereto.
The electronic device provided by the embodiments of the present disclosure enables users to experience various intelligent functions intuitively. Enterprise information is promoted to the user through the carousel pictures on the first interface, and the enterprise's artificial intelligence capabilities are presented through the function classification tabs at each level. The user can experience the various algorithm functions and effects using default pictures or locally loaded pictures, download some of the processing results, learn about the application scenarios of each function, submit feedback on each function, and make business contact with the enterprise to seek cooperation.
Based on the same inventive concept, the embodiments of the present disclosure further provide an interface display method applied to an electronic device having a display screen. As shown in Fig. 21, the interface display method includes:
S101: the display screen displays a first interface, the first interface including a first region and a second region, the first region including at least one first-level function classification tab, and the second region including multiple second-level function classification tabs and at least one third-level function classification tab included in each second-level function classification tab;
S102: in response to a trigger instruction selecting one of the at least one first-level function classification tab, the display screen displays, in the second region, the multiple second-level function classification tabs included in the selected first-level function classification tab and the at least one third-level function classification tab included in each second-level function classification tab;
S103: in response to a trigger instruction selecting one of the at least one third-level function classification tab, the display screen displays a second interface; the content included in the second interface is different from the content included in the first interface.
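Steps S101–S103 can be sketched as a small controller over a three-level tab tree; the tree contents and class name are illustrative assumptions, not part of the disclosure:

```python
# First-level tab -> {second-level tab: [third-level tabs]}; example data only.
FUNCTION_TREE = {
    "AI Vision": {
        "Recognition": ["Painting recognition", "Business card recognition"],
        "Generation": ["QR code generation"],
    },
    "AI Audio": {
        "Speech": ["Speech to text"],
    },
}

class InterfaceController:
    def __init__(self, tree):
        self.tree = tree
        self.interface = "first"      # S101: the first interface is shown
        self.second_region = None

    def select_first_level(self, tab):
        # S102: populate the second region with the selected first-level
        # tab's second-level tabs and their third-level tabs.
        self.second_region = self.tree[tab]
        return self.second_region

    def select_third_level(self, tab):
        # S103: switch the display to the second interface for this function.
        self.interface = ("second", tab)
        return self.interface
```

A selection sequence such as `select_first_level("AI Vision")` followed by `select_third_level("Painting recognition")` mirrors the S101 → S102 → S103 flow.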
Based on the same inventive concept, the embodiments of the present disclosure further provide a user interface system for picture recognition, which includes multiple first function controls, at least one second function control, at least one third function control and a picture display control presented in the user interface; each of the multiple first function controls presents in the user interface a first picture corresponding to that first function control, and the presentation state of the object in the first picture corresponding to each first function control is different in the user interface;
the second function control, once selected, is operable to let the user select and upload a second picture;
one of the multiple first function controls, once selected, or after the user's operation of selecting and uploading a second picture, is operable to cause the picture in the picture display control to be updated to the first picture or the second picture, while the attribute information of the updated picture is presented in the user interface;
after the attribute information of the updated picture has been presented and the third function control has been selected, the system is operable to cause the attribute information of the updated picture to be stored into a first data list.
Specifically, taking Fig. 22 as an example to illustrate the user interface system provided by the embodiments of the present disclosure: this user interface can recognize the information in a business card uploaded by the user. It specifically includes three first function controls 101, 102 and 103, one second function control 201, one third function control 301 and a picture display control 401 presented in the user interface. Each of the three first function controls 101-103 presents in the user interface a first picture corresponding to that control (in Fig. 22, a photo of a business card), and the presentation state of the object (in Fig. 22, the business card) in each control's first picture is different in the user interface. Specifically, in Fig. 22, the card in the picture corresponding to control 101 was shot from above at an angle close to the card's lower edge, so the card appears trapezoidal in the picture (i.e., a distorted state of the card in the picture); the card in the picture corresponding to control 102 forms an acute angle with the horizontal direction (i.e., a tilted state of the card in the picture); and the card in the picture corresponding to control 103 was shot squared-up from directly above, so the card's long and short edges are parallel to the picture's long and short edges. Business cards photographed in any of these three placements can have their information recognized by the system and displayed in the recognition result column.
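The three preset presentation states described for Fig. 22 can be encoded as a simple classification; the keystone/tilt rule of thumb below is an assumption for illustration, not the disclosure's detection method:

```python
def presentation_state(tilt_deg, keystone):
    """Classify a card photo into one of the three preset presentation
    states: distorted (trapezoid from an off-axis shot, control 101),
    tilted (acute angle with the horizontal, control 102), or straight
    (squared-up top-down shot, control 103)."""
    if keystone:
        return "distorted"
    if tilt_deg != 0:
        return "tilted"
    return "straight"
```

Whatever its presentation state, each preset picture is fed to the same recognizer, which is the point the paragraph above makes.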
The second function control 201 (the upload-picture control in Fig. 22), after being selected by the user, is operable to let the user select and upload a user-defined second picture. The user may select the second function control by tapping it, or by mouse, gesture, voice command or other means. The user-defined picture uploaded by the user may be a picture stored on the terminal device, a picture taken by the user in real time, or a picture downloaded from the Internet or a cloud server.
One of the multiple first function controls, once selected, or after the user's operation of selecting and uploading a second picture, is operable to cause the picture in the picture display control to be updated to the first picture or the second picture, while the attribute information of the updated picture is presented in the user interface. For example, in Fig. 22, the selected first function control is 101, so the picture display control 401 shows the picture corresponding to 101; at the same time as 101 is selected, the recognition result also presents the relevant information from the business card corresponding to 101, specifically including name, title, company, address, email and phone number. If the user uploads a user-defined picture, the picture display control 401 shows that picture instead, and the recognition result is the recognition of the relevant information in the user-defined picture, which may include text information, letter information, digit information, and so on.
After the attribute information of the updated picture has been presented (for example, the information of the business card corresponding to 101 in Fig. 22) and the third function control 301 has been selected, the attribute information of the updated picture can be stored into a first data list. For example, control 301 may be selected by clicking, whereupon the relevant information from the business card is saved into the relevant database. In Fig. 22, the function of control 301 is to save the phone number into the user's contacts, in which case the first data list is the user's contacts list. Of course, the user may also configure the function of control 301 as needed.
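The save flow around control 301 might be sketched as below; the recognizer stand-in and field names follow the business-card example in Fig. 22, but the function names and values are placeholders, not the disclosure's implementation:

```python
def recognize_card(picture_id):
    """Stand-in for the recognition backend: returns the attribute
    information presented alongside the picture display control."""
    return {"name": "...", "title": "...", "company": "...",
            "address": "...", "email": "...", "phone": "138-0000-0000"}

def save_selected(attributes, selected_keys, data_list):
    """Third function control: store only the user-selected attributes
    (e.g. just the phone number) into the first data list (here, the
    contacts list)."""
    entry = {k: attributes[k] for k in selected_keys if k in attributes}
    data_list.append(entry)
    return entry
```

Selecting only `"phone"` reproduces the Fig. 22 behavior of saving the phone number to the contacts, while other field selections could be configured as the text suggests.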
Based on the same inventive concept, the embodiments of the present disclosure further provide an interaction method for picture recognition through a user interface. As shown in Fig. 23, the method includes:
S201: providing, in the user interface, multiple first function controls, at least one second function control, at least one third function control and a picture display control;
S202: each of the multiple first function controls presenting the first picture corresponding to that first function control, the presentation state of the object in the first picture corresponding to each first function control being different;
S203: in response to a selection instruction for the second function control, the user selecting and uploading a second picture;
S204: after receiving a selection indication for one of the multiple first function controls or an instruction in which the user selects and uploads a second picture, updating the picture in the picture display control to the first picture or the second picture, while the attribute information of the updated picture is presented in the user interface;
S205: after the attribute information of the updated picture has been presented and a selection indication for the third function control has been received, storing the attribute information of the updated picture into a first data list.
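Step S204's update behavior, triggered either by a preset first function control or by an uploaded picture, might look like the routine below; the preset table and recognizer hook are illustrative assumptions:

```python
# Example preset pictures for the three first function controls of Fig. 22.
PRESETS = {101: "distorted card", 102: "tilted card", 103: "straight card"}

def update_display(selection, recognizer, state):
    """S204: on a preset-control selection (an int id) or an uploaded
    picture (anything else), update the picture display control and re-run
    recognition so the attribute info is presented with the picture."""
    picture = PRESETS.get(selection, selection)
    state["picture"] = picture
    state["attributes"] = recognizer(picture)
    return state
```

The same routine serves S203 uploads and S202 preset selections, keeping the displayed picture and its attribute information in sync as the text requires.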
Optionally, in the interaction method provided by the embodiments of the present disclosure, the attribute information of the updated picture includes at least one of text information, letter information and digit information.
Specifically, taking Fig. 22 as an example, when the picture is a business card, the attribute information of the picture may include name, title, company, address and email.
Optionally, in the interaction method provided by the embodiments of the present disclosure, storing the attribute information of the updated picture into the first data list, after the attribute information of the updated picture has been presented and a selection indication for the third function control has been received, includes:
when there are multiple pieces of attribute information, after the selected attribute information is received and a selection indication for the third function control is received, storing the selected attribute information into the first data list.
Specifically, taking Fig. 22 as an example, each of name, title, company, address and email in the picture's attribute information can be individually selected to be stored into the first data list.
Optionally, in the interaction method provided by the embodiments of the present disclosure, the presentation state includes at least one of a tilted state of the object and a distorted state of the object.
Specifically, taking Fig. 22 as an example, the presentation state of the object in each first picture is different: the card in the picture corresponding to 101 is in a distorted state, and the card in the picture corresponding to 102 is in a tilted state.
Specifically, the electronic devices, computer storage media, computer program products and chips provided by the embodiments of the present application are all used to perform the corresponding methods provided above; therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding methods provided above, which are not repeated here.
From the description of the above implementations, those skilled in the art will understand that, for convenience and brevity of description, only the division into the above functional modules is used as an example. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
In the several embodiments provided by the present application, it should be understood that the disclosed devices and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative; for instance, the division into modules or units is only a division by logical function, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another device, or some features may be omitted or not performed. Furthermore, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
Units described as separate components may or may not be physically separate, and components shown as units may be one physical unit or multiple physical units; that is, they may be located in one place or distributed across multiple different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above integrated unit may be implemented either in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium. Based on this understanding, the technical solution of the embodiments of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, etc.) or a processor to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The above is only a specific implementation of the present application, but the protection scope of the present application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed by the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

  1. An interaction method for picture recognition through a user interface, comprising:
    providing, in the user interface, multiple first function controls, at least one second function control, at least one third function control and a picture display control;
    each of the multiple first function controls presenting a first picture corresponding to that first function control, the presentation state of the object in the first picture corresponding to each first function control being different;
    in response to a selection instruction for the second function control, receiving a second picture selected and uploaded by the user;
    after receiving a selection indication for one of the multiple first function controls or an instruction in which the user selects and uploads a second picture, updating the picture in the picture display control to the first picture or the second picture, while the attribute information of the updated picture is presented in the user interface;
    after the attribute information of the updated picture has been presented and a selection indication for the third function control has been received, storing the attribute information of the updated picture into a first data list.
  2. The interaction method of claim 1, wherein the attribute information of the updated picture comprises at least one of text information, letter information and digit information.
  3. The interaction method of claim 1, wherein storing the attribute information of the updated picture into the first data list, after the attribute information of the updated picture has been presented and a selection indication for the third function control has been received, comprises:
    when there are multiple pieces of attribute information, after the selected attribute information is received and a selection indication for the third function control is received, storing the selected attribute information into the first data list.
  4. The interaction method of claim 1, wherein the presentation state comprises at least one of a tilted state of the object and a distorted state of the object.
  5. A user interface system for picture recognition, comprising multiple first function controls, at least one second function control, at least one third function control and a picture display control presented in the user interface; wherein each of the multiple first function controls presents in the user interface a first picture corresponding to that first function control, and the presentation state of the object in the first picture corresponding to each first function control is different in the user interface;
    the second function control, once selected, is operable to let the user select and upload a second picture;
    one of the multiple first function controls, once selected, or after the user's operation of selecting and uploading a second picture, is operable to cause the picture in the picture display control to be updated to the first picture or the second picture, while the attribute information of the updated picture is presented in the user interface;
    after the attribute information of the updated picture has been presented and the third function control has been selected, the system is operable to cause the attribute information of the updated picture to be stored into a first data list.
  6. An electronic device, comprising a display screen, a memory and a processor, wherein:
    the memory, connected to the display screen and the processor, is configured to store computer instructions and save data associated with the display screen;
    the processor, connected to the display screen and the memory, is configured to execute the computer instructions so that the electronic device performs:
    displaying a first interface on the display screen, the first interface comprising a first region and a second region, the first region comprising at least one first-level function classification tab, and the second region comprising multiple second-level function classification tabs and at least one third-level function classification tab comprised in each second-level function classification tab;
    in response to a trigger instruction selecting one of the at least one first-level function classification tab, displaying, by the display screen in the second region, the multiple second-level function classification tabs comprised in the selected first-level function classification tab and the at least one third-level function classification tab comprised in each second-level function classification tab;
    in response to a trigger instruction selecting one of the at least one third-level function classification tab, displaying a second interface on the display screen, the content comprised in the second interface being different from the content comprised in the first interface.
  7. The electronic device of claim 6, wherein the first interface further comprises a third region, and the third region, the first region and the second region are arranged in sequence from the top to the bottom of the first interface.
  8. The electronic device of claim 6, wherein, when the display screen displays the first interface, in response to a directional-movement trigger instruction along a first direction in the second region, the corresponding first-level function classification tab is selected.
  9. The electronic device of claim 7, wherein, when the display screen displays the first interface, in response to a directional-movement trigger instruction along a second direction, the entire first interface moves along the second direction relative to the display screen, and when the first region of the first interface moves to the top of the first interface, the first region remains fixed in position while the second region moves along the second direction.
  10. The electronic device of claim 6, wherein, when the display screen displays the second interface corresponding to the selected third-level function classification tab, the second interface comprises a first display area, a second display area, a third display area and a fourth display area;
    the first display area comprises a third-level function experience-object picture or a third-level function experience-object effect picture; the second display area comprises multiple example pictures distributed along a third direction, each of the multiple example pictures being smaller in size than the picture in the first display area;
    in response to a selection instruction for one of the multiple example pictures, the first display area displays the effect picture and the complete picture corresponding to the selected example picture, the effect picture corresponding to the function of the selected third-level function classification tab;
    in response to a directional-movement trigger instruction along the third direction, the multiple example pictures move simultaneously along the third direction in the second display area;
    the third display area comprises a first operation tab for uploading a picture;
    the fourth display area comprises an application scenario tab associated with the selected third-level function classification tab.
  11. The electronic device of claim 10, wherein the first display area, the second display area, the third display area and the fourth display area are arranged along a fourth direction in the second interface, the third direction being the horizontal direction on the display screen and the fourth direction being the vertical direction on the display screen.
  12. The electronic device of claim 10, wherein, when the display screen displays the second interface corresponding to the selected third-level function classification tab, the second interface further comprises a fifth display area;
    the fifth display area displays the attribute information of the complete picture.
  13. The electronic device of claim 12, wherein the third display area further comprises a second operation tab; in response to selection of the second operation tab, the attribute information is saved into an attribute information data list.
  14. The electronic device of claim 6, wherein, when the display screen displays the second interface corresponding to the selected third-level function classification tab, the second interface comprises a first display area, a second display area and a third display area;
    in response to input information in the second display area, the first display area of the second interface of the display screen generates a first converted image corresponding to the input information;
    the third display area comprises a third operation tab and a fourth operation tab, the third operation tab being used to convert the first converted image into a second converted image, the fourth operation tab being used to convert the first converted image into a third converted image, the second converted image and the third converted image being different.
  15. An interaction method for an electronic device, wherein the electronic device comprises a display screen, and the method comprises:
    controlling the display screen to display a first interface, the first interface comprising a first region and a second region, the first region comprising at least one first-level function classification tab, and the second region comprising multiple second-level function classification tabs and at least one third-level function classification tab comprised in each second-level function classification tab;
    in response to a trigger instruction selecting one of the at least one first-level function classification tab, controlling the display screen to display, in the second region, the multiple second-level function classification tabs comprised in the selected first-level function classification tab and the at least one third-level function classification tab comprised in each second-level function classification tab;
    in response to a trigger instruction selecting one of the at least one third-level function classification tab, controlling the display screen to display a second interface, the content comprised in the second interface being different from the content comprised in the first interface.
PCT/CN2019/121748 2019-11-28 2019-11-28 User interface system, electronic device and interaction method for picture recognition WO2021102850A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2019/121748 WO2021102850A1 (zh) 2019-11-28 2019-11-28 图片识别的用户界面系统、电子设备及交互方法
US17/255,458 US20210373752A1 (en) 2019-11-28 2019-11-28 User interface system, electronic equipment and interaction method for picture recognition
CN201980002707.7A CN113260970B (zh) 2019-11-28 2019-11-28 图片识别的用户界面系统、电子设备及交互方法

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/121748 WO2021102850A1 (zh) 2019-11-28 2019-11-28 图片识别的用户界面系统、电子设备及交互方法

Publications (1)

Publication Number Publication Date
WO2021102850A1 true WO2021102850A1 (zh) 2021-06-03



Also Published As

Publication number Publication date
CN113260970A (zh) 2021-08-13
US20210373752A1 (en) 2021-12-02
CN113260970B (zh) 2024-01-23


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19954449

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19954449

Country of ref document: EP

Kind code of ref document: A1


32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08.02.2023)
