CN110120086B - Man-machine interaction design method, system and data processing method - Google Patents
- Publication number
- CN110120086B (application CN201810117056.3A)
- Authority
- CN
- China
- Prior art keywords
- user
- input
- design
- profile information
- human
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06F9/451 — Execution arrangements for user interfaces
- G06T11/60 — Editing figures and text; Combining figures or text
- G06T11/80 — Creating or modifying a manually drawn or painted image using a manual input device, e.g. mouse, light pen, direction keys on keyboard
- G06T2200/24 — Indexing scheme for image data processing or generation, in general, involving graphical user interfaces [GUIs]
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
A design method, comprising: receiving a first input; obtaining profile information associated with the user based on the first input; obtaining design information associated with the profile information based on the profile information; receiving a second input at a user interface; and displaying the design information on the user interface based on the second input.
Description
Technical Field
The present disclosure relates to a design method, and more particularly, to a man-machine interaction design method, system, and data processing method.
Background
With the development of information technology and the growth of the global economy, products and their service providers, both in the physical world and on the internet, need to update their advertising materials ever more quickly. For example, web stores must replace large numbers of promotional pictures, videos, and audio clips with different content to match the various promotions held at different times of the year (e.g., the Double 11 shopping festival, Spring Festival, etc.). Today such advertising materials are usually produced on a computer by a professional designer using professional design software. This approach places high demands on the designer, who typically needs a foundation in drawing and strong proficiency with the software, so it is time-consuming and costly; moreover, modifying these designs also requires specialized personnel, so modification and maintenance are expensive as well.
On the other hand, current computer-based advertisement design departs from the way humans design most naturally: a person drawing usually sketches lines and shapes, whereas current design software carries out the design process (e.g., resizing an image) through program commands or numeric input. Human-computer interaction in the current design process is therefore not natural enough, which makes it hard for people without a design background to learn quickly and get started.
One existing design solution selects a series of patterns similar to a pattern entered by the user and recommends them to the user. This approach suits simple designs by users who cannot draw, but it cannot produce relatively complex design files, and its human-machine interaction is likewise not natural enough.
There is therefore a need for a design method that lets a person without a design foundation obtain a desired design file quickly and reliably, with human-machine interaction throughout the design process that matches the most natural human behavior.
Disclosure of Invention
According to an embodiment of one aspect of the present disclosure, there is provided a design method including: receiving a first input; obtaining profile information associated with a user based on the first input; obtaining design information associated with the profile information from a database in accordance with the profile information; receiving a second input at the user interface; the design information is displayed on the user interface based on the second input.
According to another aspect of the disclosure, there is provided a non-transitory storage medium storing a set of instructions that, when executed by a processor, cause the processor to perform the following process: receiving a first input; obtaining profile information associated with a user based on the first input; obtaining design information associated with the profile information according to the profile information; receiving a second input at the user interface; the design information is displayed on the user interface based on the second input.
According to another aspect of the disclosure, there is provided a mobile device comprising a non-transitory storage medium storing a set of instructions that, when executed by a processor, cause the processor to perform the following process: receiving a first input; obtaining profile information associated with a user based on the first input; obtaining design information associated with the profile information according to the profile information; receiving a second input at the user interface; the design information is displayed on the user interface based on the second input.
According to another aspect of the present disclosure, there is provided a system comprising a server and a client, the server comprising a non-transitory storage medium storing a set of instructions that, when executed by a processor, cause the processor to perform the following process: receiving a first input; obtaining profile information associated with a user based on the first input; obtaining design information associated with the profile information according to the profile information; receiving a second input at the user interface; the design information is displayed on the user interface based on the second input.
According to another aspect of the present disclosure, there is provided a human-machine interaction design apparatus, including: a human-machine interaction interface; at least one server; a processor communicatively connected to the server through a communication link; and a non-transitory storage medium storing a set of instructions that, when executed by the processor, enable the processor to perform the following process: receiving a first input; obtaining profile information associated with a user based on the first input; obtaining design information associated with the profile information according to the profile information; receiving a second input at the user interface; the design information is displayed on the user interface based on the second input.
According to another aspect of the present disclosure, there is provided a human-computer interaction data processing method, including: the server receives a first input; the server deriving profile information associated with the user based on the first input; the server obtains design information associated with the profile information based on the profile information; the server receives a second input; the server parses the second input and obtains parameter values for the second input, the parameter values comprising at least one of: length, shape, area, or a combination thereof; and selecting corresponding design information based on the parameter value and sending the selected design information to a user side.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate and explain the exemplary embodiments of the disclosure and together with the description serve to explain the disclosure and do not constitute an undue limitation on the disclosure. In the drawings:
FIGS. 1-1 and 1-2 show portions of a flow according to some embodiments;
FIGS. 2-1 through 2-12 each illustrate an exemplary step according to some embodiments;
FIG. 3 is a schematic diagram of a mobile device according to some embodiments;
FIG. 4 is a schematic diagram of a system according to some embodiments.
Detailed Description
The foregoing summary, as well as the following detailed description of certain embodiments, will be better understood when read in conjunction with the appended drawings. The diagrams illustrating functional blocks of some embodiments are not necessarily indicative of a division between hardware circuits. Thus, for example, one or more of the functional blocks (e.g., processors or memories) may be implemented in a single piece of hardware (e.g., a general-purpose signal processor, a block of random access memory, a hard disk, or the like) or in multiple pieces of hardware. Similarly, the programs may be stand-alone programs, may be incorporated as routines in an operating system, may be functions in an installed software package, and the like. It should be understood that some embodiments are not limited to the arrangements and instrumentality shown in the drawings.
As used in this disclosure, an element or step recited in the singular and proceeded with the word "a" or "an" should be understood as not excluding plural said elements or steps, unless such exclusion is explicitly recited. Furthermore, references to "one embodiment" are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features. Embodiments that "comprise," "include," or "have" an element or elements having a particular attribute may include additional such elements not having that attribute unless the contrary is explicitly stated.
Some embodiments provide a human-machine interaction design method 10, shown in figs. 1-1 and 1-2. The method 10 begins at step 11. After the method 10 begins, a first interaction takes place between the user and the system he/she is operating: the method 10 receives a first input at step 12. The first input is used to confirm the user's identity. The first input may be accomplished in a variety of ways, such as logging in with a conventional account password; by means that uniquely confirm the identity of the user's body, such as, but not limited to, fingerprint input, voiceprint input, or retinal input; or, in some embodiments, through a Virtual Private Network (VPN), a Virtual Private Cloud (VPC), or similar access.
In some embodiments, after the first input 12 is completed, the method 10 obtains profile information associated with the user from a database based on the user identity confirmed by the first input, the profile information including the user's age, gender, professional domain, and the like. For example, if the user is a seller on a network platform (e.g., Tmall), then after completing the first input, the system may obtain user profile information at step 13 within the scope allowed by law and authorized by the user, for example acquiring the user's professional domain (e.g., the user's industry is the food domain; information on the user's web store), age, and gender from a database storing the network platform's seller information. Some other information can also be obtained by technical means such as data mining; for example, by statistics and analysis of the user's search history, an area of interest to the user, such as automobiles, can be inferred.
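By way of illustration only, the interest-inference step above might be sketched as a simple keyword-frequency analysis over the user's search history. This is a minimal sketch under assumed names: the DOMAIN_KEYWORDS table and the infer_interest_areas function are hypothetical and not part of the disclosure, which could equally use a trained classifier.

```python
from collections import Counter

# Hypothetical keyword-to-domain table; a real system would use a much
# richer taxonomy or a trained classifier rather than this toy list.
DOMAIN_KEYWORDS = {
    "automobile": {"car", "suv", "engine", "tire"},
    "food": {"snack", "recipe", "sauce", "tea"},
    "wine": {"wine", "liquor", "brandy", "cellar"},
}

def infer_interest_areas(search_history, top_n=1):
    """Guess the user's areas of interest by counting how many search
    queries touch each domain's keyword set."""
    counts = Counter()
    for query in search_history:
        words = set(query.lower().split())
        for domain, keywords in DOMAIN_KEYWORDS.items():
            if words & keywords:
                counts[domain] += 1
    return [domain for domain, _ in counts.most_common(top_n)]

# infer_interest_areas(["best suv 2018", "engine oil change"]) -> ["automobile"]
```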
In some embodiments, profile information associated with the user is obtained directly from the first input 12: for example, the user directly inputs his/her own age, gender, professional domain, and so on. In these embodiments, the method 10 learns the profile information through the first input 12 without consulting any database.
At step 14, the method 10 obtains design information associated with the profile information from a database. The database stores various design information, including document sets, picture sets, video sets, audio sets, and the like, classified according to common divisions of social industries or commodity fields, with the elements in each set individually tagged to facilitate query and indexing.
In some embodiments, after the user profile information is obtained, the profile information is correlated at step 14 with various design information in the database. In some embodiments, the associated information in the database is obtained based on the user's professional domain; for example, if the profile information shows that the user's industry is the food domain, the method 10 will associate the food domain's document set, picture set, video set, and audio set in the database and can obtain such design information. Note that "obtaining" here does not necessarily mean that the domain's document, picture, video, and audio sets are downloaded to the local system the user operates; rather, the system links to the database in subsequent operations and retrieves the above information as needed.
In some embodiments, after the user profile information is obtained, if the user is a network platform seller, such as a seller with a Tmall store, the method 10 obtains a data set at step 14 by performing real-time data acquisition and analysis on the user's store, within the legally permitted data acquisition range, and uses that data set to make recommendations to the user in subsequent steps. For example, after the pictures and advertisements in the user's store are obtained, they are ranked by frequency of occurrence to form a data set. It should be understood that the data set may exist in tabular, ordered form or in any other suitable manner of storing data in a database.
In some embodiments, after acquiring the professional information in the user profile, the method 10 may analyze the user's industry by big data analysis, obtain a data set, and make recommendations to the user in subsequent steps according to that data set. The big data analysis here may be done in real time. For example, knowing that the user is engaged in the wine industry, the method 10 scans and analyzes, within the legally permitted data acquisition range, the store data of multiple wine merchants on one or more network platforms (e.g., Taobao and/or Tmall) and recommends to the user based on the analysis results. The analysis includes ordering the document set, picture set, video set, and audio set by multidimensional weighting factors, which may include frequency of occurrence, merchant type (B-class or C-class merchant), region, price range, and so on; the analysis may employ various known algorithms or machine learning methods, such as linear regression, principal component analysis, and neural networks. The analysis yields a data set; it should be understood that the data set may exist in tabular, ordered form or in any other suitable manner of storing data in a database.
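A minimal sketch of the multidimensional weighted ordering described above follows. The factor names mirror the description (frequency of occurrence, merchant type, region, price range), but the weight values, normalization, and function names are illustrative assumptions rather than the disclosed algorithm, which may instead learn the ordering with linear regression, principal component analysis, or a neural network.

```python
# Assumed weights for the factors named above; a real system might
# learn these rather than fixing them by hand.
WEIGHTS = {"frequency": 0.5, "merchant_type": 0.2, "region": 0.2, "price_range": 0.1}

def score(material):
    """Weighted score over factor values normalized to [0, 1]."""
    return sum(WEIGHTS[f] * material.get(f, 0.0) for f in WEIGHTS)

def rank_materials(materials):
    """Order a material set (document set, picture set, etc.) from
    highest to lowest weighted score."""
    return sorted(materials, key=score, reverse=True)

materials = [
    {"text": "* Wine promotion", "frequency": 0.9, "merchant_type": 1.0,
     "region": 0.6, "price_range": 0.4},
    {"text": "* Aged ten years", "frequency": 0.7, "merchant_type": 0.5,
     "region": 0.8, "price_range": 0.9},
]
print([m["text"] for m in rank_materials(materials)])
# -> ['* Wine promotion', '* Aged ten years']
```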
Table 1 gives an example of a text data set obtained by data acquisition and analysis of a particular user's store: after the advertisements in the store are obtained, they are ranked by frequency of occurrence, yielding a text data set that records rank, text content, and byte count. It should be appreciated that text acquisition and analysis for the user's industry may be performed in a similar manner, and a similar approach may be used to rank picture data.
Table 1

| Rank | Text content | Number of text bytes |
|---|---|---|
| 1 | * Wine promotion | N1 |
| 2 | * Aged ten years | N2 |
| 3 | * Mellow taste | N3 |
| 4 | * Buy one, get a gift | N4 |
| … | … | … |
In some embodiments, the user begins the design process at step 15 by making a second input through the user interface. The second input does not require the user to draw the desired finished graphic; instead, the user inputs by drawing lines and simple geometric shapes. For example, when the user draws a straight line segment on the user interface, the method 10 automatically selects, from a text library in the database associated with the user (e.g., with the user's industry), a document (advertising copy) whose length is substantially equivalent to the length of the line, and replaces the line segment drawn by the user with that copy; if the user draws a geometric figure (e.g., a triangle) on the user interface, the method 10 automatically selects, from the associated picture library, a picture (advertising picture) corresponding to the triangle's area and replaces the triangle drawn by the user with the selected picture.
Note that the input-based matching above is performed by comparing data. For example, as listed in Table 1, different text materials have different byte counts; the first-ranked text material, "Wine promotion," comprises N1 bytes. When the user draws a straight line segment on the user interface, the segment is parsed and a length parameter value is obtained based on the specific size of the user interface; that length parameter value is then matched against the byte counts of the text materials to obtain the required text material/data.
The parsing may use the ratio of the length of the straight line segment to the size of the user interface. For a segment of the same geometric length, this ratio differs across user interfaces, for example between a mobile phone screen and a desktop display, so the same segment may parse to different parameter values on different user interfaces and therefore match different text material.
It should be appreciated that a similar approach may be used to match picture material to the geometry drawn by the user. The geometry is parsed to obtain its shape parameter, such as triangle or rectangle. The geometry is further parsed to obtain its area value, and an area parameter is computed from the ratio of that area value to the area of the user interface (e.g., the area of a 14-inch display); picture material can then be matched by both the shape parameter and the area parameter, or by the area parameter alone.
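The parameter extraction and comparison-based matching described in the last three paragraphs might look like the following sketch. The normalization by interface size follows the description; the byte-capacity conversion factor and all function names are assumptions for illustration.

```python
def line_length_param(segment_px, interface_px):
    """Length parameter: ratio of the drawn segment's length to the
    user interface's size, so the same geometric segment can parse to
    different values on a phone screen vs. a desktop display."""
    return segment_px / interface_px

def match_text_material(length_param, text_set, interface_capacity_bytes):
    """Pick the text material whose byte count best matches the length
    parameter. `interface_capacity_bytes` (how many bytes fit across
    the full interface width) is an assumed conversion factor."""
    target = length_param * interface_capacity_bytes
    return min(text_set, key=lambda m: abs(m["bytes"] - target))

def match_picture_material(shape, area_param, picture_set):
    """Match picture material by shape parameter plus normalized area
    parameter, falling back to area alone if no shape matches."""
    candidates = [p for p in picture_set if p.get("shape") == shape] or picture_set
    return min(candidates, key=lambda p: abs(p["area_ratio"] - area_param))
```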
In some embodiments, the method 10 pushes various materials to the user through data mining and analysis based on the user profile information. Here, "push" means that when the user makes the second input, the method 10 selects suitable text, picture, video, audio, and other material from a second database based on the user profile information. For example, besides the user's industry as described above, corresponding material may be pushed to the user with weights that consider the user's gender, age, and similar information. In some embodiments, the user's field of interest can be inferred from web pages the user frequently browses, and corresponding material pushed on that basis. In some embodiments, the method 10 may recommend based on the data set obtained by data acquisition and analysis of the user's store, or based on the data set obtained by big data analysis of the user's industry.
In some embodiments, the recommendation process described in the previous paragraph may be implemented through interactive selection between the user and the interface: for example, when the user draws a straight line, recommended text materials appear next to the line, arranged from highest to lowest rank, where the ranking may be determined by any one or more of the analysis modes described above. Picture, video, and audio materials can likewise be offered through interactive selection. Further, in some embodiments, if a machine learning method is used to rank the materials, then after the user selects certain materials through interface interaction, the selections may be fed back to the method 10 as parameters for further training and tuning of the machine learning method.
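A sketch of that feedback step follows, assuming the selection is logged as an implicit relevance label for later tuning of the ranking model; the record format and names are hypothetical.

```python
def record_selection(shown_materials, chosen, feedback_log):
    """Log which material the user picked from the recommendation
    frame, together with the ranking that was shown, so the ranking
    model can later be retrained or tuned on this implicit feedback."""
    feedback_log.append({
        "shown": [m["id"] for m in shown_materials],
        "chosen": chosen["id"],
    })

# Usage: when the user clicks the first recommendation,
# record_selection(recommendations, recommendations[0], feedback_log)
```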
The line drawn by the user is not limited to a straight segment; it may also be a curved segment. Simple geometric figures drawn by the user include rectangles, parallelograms, triangles, circles, ellipses, and the like. Note that the straight line segment drawn by the user need not be a straight segment in the strict geometric sense; it only needs to be recognizable as a straight line by an ordinary person, and the same holds for the simple geometric figures.
In some embodiments, specific geometric shapes are mapped to specific categories of design information. For example, if the user draws a triangle, the method 10 selects picture material; a rectangle, video material; a circle, audio material. This mapping is not fixed and may be adjusted as needed. Through such mappings, the user can obtain the required materials more conveniently. Note that the user need not use all of the text, picture, video, and audio materials; the user is free to select one or more of them to complete the design.
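The shape-to-category mapping above, together with its adjustability, could be held in a simple table, as in the sketch below; the default entries follow the example in the text, while the override mechanism is an illustrative assumption.

```python
# Default mapping following the example above; per the description it
# is adjustable rather than fixed.
SHAPE_TO_CATEGORY = {
    "triangle": "picture",
    "rectangle": "video",
    "circle": "audio",
}

def category_for(shape, overrides=None):
    """Resolve which material category a drawn shape selects, honoring
    any user- or system-level overrides."""
    mapping = {**SHAPE_TO_CATEGORY, **(overrides or {})}
    return mapping.get(shape, "text")  # plain lines select text material
```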
In some embodiments, various forms of user interface are used: a conventional display screen, or other user interface means such as an AR/VR headset or a 3D display, so long as the means can accomplish human-machine interaction. The user's input may come from an ordinary keyboard and mouse, or from gestures or voice. In some embodiments, gesture input includes the user touching the user interface; motion capture input may also be employed, so long as the input lets the user feel that he/she is drawing lines or geometric shapes on the user interface.
In some embodiments, the method 10 records time information when the user makes the first input and/or the second input; for example, if the user makes the inputs at 20:31 on the night of October 30, the system records that time point. The precision of the time information is determined by the system's settings and may be to the day, hour, or minute; day-level precision is typical. The recorded time information is used in subsequent steps.
In some embodiments, after the user has completed the second input, the user may choose at step 16 whether to modify the text, picture, video, or audio material. For example, if the user is dissatisfied with the copy (advertisement) presented by the method 10, the user may modify it directly. In some embodiments, the method 10 also automatically modifies the associated material: when the user modifies the copy, one or more of the remaining picture, video, and audio materials are changed automatically in accordance with the modification, based on associations between the data in the database. For example, if the user changes "food" in an advertisement to "home furnishing," the original food picture will be replaced with a home-furnishing picture, and likewise for the video and audio material.
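That association-driven replacement might be sketched as a lookup keyed on the modified text, as below; the in-memory association table stands in for the database associations, and all keys and file names are hypothetical.

```python
# In-memory stand-in for the database associations; keys and file
# names are hypothetical.
ASSOCIATIONS = {
    "food": {"picture": "food.jpg", "video": "food.mp4", "audio": "food.mp3"},
    "home furnishing": {"picture": "home.jpg", "video": "home.mp4", "audio": "home.mp3"},
}

def on_text_modified(new_keyword, design):
    """When the user edits the text material, swap every unmodified
    sibling material for the one associated with the new keyword."""
    related = ASSOCIATIONS.get(new_keyword)
    if related:
        design.update(related)
    return design
```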
In some embodiments, the recorded time information assists in pushing material to the user. For example, if the user begins designing on October 30, the network platform shopping festival nearest to that date is November 11; the method 10 infers that the user is most likely designing for a Double 11 promotion and includes "Double 11" information in the text material it pushes to the user. The same applies to other points in time, e.g., near Spring Festival or National Day.
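For example, the time-aware push could pick the nearest upcoming festival as sketched below; the festival calendar here is an illustrative assumption and would come from configuration in practice.

```python
from datetime import date

# Illustrative festival calendar; a deployment would load this from
# configuration rather than hard-coding dates.
FESTIVALS = {
    "Double 11": date(2018, 11, 11),
    "Double 12": date(2018, 12, 12),
    "Spring Festival": date(2019, 2, 5),
}

def nearest_festival(design_date):
    """Return the upcoming festival closest to the day the user starts
    designing, used to theme the pushed material."""
    upcoming = {name: d for name, d in FESTIVALS.items() if d >= design_date}
    if not upcoming:
        return None
    return min(upcoming, key=lambda name: upcoming[name] - design_date)

# nearest_festival(date(2018, 10, 30)) -> "Double 11"
```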
In some embodiments, after the user has completed the modification, the method proceeds to step 17; in some embodiments, the user may also choose not to modify after step 15, in which case the method goes directly to step 17.
In some embodiments, at step 17 the user selects a design mode, such as the B-end (Business) and C-end (Customer) modes well known in the network transaction platform field. The method 10 offers a choice between To B and To C advertisement design modes: if the user's primary customers are merchants, he/she may select the To B design mode; if the user's primary customers are consumers, he/she may select the To C design mode. Different design modes correspond to different design styles; for example, the To B design mode may favor a mature, steady overall style, while the To C design mode may favor a warmer, more enthusiastic one.
In some embodiments, at step 17 the user also selects a composition (typesetting) mode, for example centered, left-right, diagonal, or up-down, and a color mode, for example light, natural, or rich. These typesetting and color selections are reflected in the final design. It should be understood that the options for design modes, typesetting modes, and color modes are not limited to these and are set as needed.
When the user has finished selecting the design mode, typesetting mode, and color mode, the user confirms completion (the user interface provides a "complete design" button for this), and the design is generated at step 18.
In some embodiments, the design is generated on the basis of the user-associated text, picture, video, and audio materials. The user may also simply skip selecting the design mode, typesetting mode, and color mode, or select only one or two of them, and still complete the design; for any mode the user does not select, the method 10 applies a recommended default.
In some embodiments, the method 10 completes the design automatically, combining the materials described above and adding a background, and displays the completed design on the user interface at step 18. The combining and background-adding are done by machine, either according to a set program or by a self-learning mechanism or AI (artificial intelligence) approach.
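A sketch of that automatic combination step: the selected materials and a background are assembled into a single design description, with recommended defaults filling in any unselected modes. The output structure and the default values are assumptions for illustration; the disclosure leaves the actual combination to a set program or a self-learning/AI mechanism.

```python
def compose_design(materials, design_mode=None, layout=None, color=None):
    """Combine the user's materials with a background into one design
    description; any mode the user did not select falls back to a
    recommended default, per the description above."""
    design_mode = design_mode or "To C"   # recommended defaults are
    layout = layout or "left-right"       # illustrative assumptions
    color = color or "natural"
    return {
        "mode": design_mode,
        "layout": layout,
        "background": f"background-{color}.png",
        "layers": [materials[k] for k in ("picture", "text", "video", "audio")
                   if k in materials],
    }

# compose_design({"text": "* Wine promotion", "picture": "wine.jpg"})
```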
In some embodiments, after step 18 is complete, the user ends method 10 at step 19.
Exemplary descriptions of some embodiments are given below in conjunction with fig. 2-1 through 2-11.
After the user completes the first input, the user's identity is confirmed, assuming that the user is a network platform vendor of the wine product.
Referring to figs. 2-1 and 2-2, the user first draws a straight line segment on the user interface, and a "wine promotion" text material then appears on the user interface with a length substantially equal or close to that of the line the user drew. Fig. 2-2 compares the length of the text material with the length of the drawn line; it should be appreciated that the line segment is shown dashed in fig. 2-2 only for comparison. In actual operation, the line segment disappears when the text material appears.
Next, referring to fig. 2-3, the user draws a triangle on the user interface. Although the triangle in fig. 2-3 is a geometrically regular one, this is for illustration only; in practice the user may draw a triangle whose sides are not perfectly straight, which does not prevent the implementation of some embodiments. When the user completes the triangle, picture material appears in the user interface with an area substantially equivalent or close to that of the drawn triangle (fig. 2-4). As with the previous paragraph, the dashed triangle in fig. 2-4 is shown merely to compare the triangle with the picture; in practice, the drawn triangle disappears once the picture material appears.
As a comparative example, in some embodiments, after the user draws a straight line, text material is not presented directly; instead an interface frame appears next to the line listing a plurality of text materials recommended to the user, displayed from highest to lowest rank after being sorted by the analysis modes described above. As illustrated in fig. 2-5, four text materials are shown in the interface frame, and the user selects the first (indicated by the mouse arrow); if the user is not satisfied with the materials shown, more text material can be reached by pulling down the progress bar on the right. The same process applies to the selection of picture, video, and audio material.
Next, the user selects the design mode; in fig. 2-5 the user selects the "consumer advertisement/To C" mode. Referring to figs. 2-6, the user may continue to select the typesetting mode and the color mode, e.g., the left-right mode and the natural mode; the user then clicks to complete the design, producing the design drawing illustrated in figs. 2-7.
As a comparative example, if in the above process the user still selects the "consumer advertisement/To C" mode but chooses the up-down typesetting mode instead of left-right, and the elegant color mode instead of natural, then when the user clicks to complete the design, the design drawing illustrated in figs. 2-8 is obtained.
As a comparative example, if the user is unsatisfied with the text material presented by the method 10, the user may modify it; referring to figs. 2-9, the user changes "wine promotion" to "home furnishing." In this case, the method 10 matches new picture material for the user and asks for confirmation. Once the user confirms, the method 10 displays the newly matched picture material on the user interface and deletes the previous picture. As figs. 2-9 show, the original wine picture has been replaced with a home-furnishing picture to match the text entered by the user.
In some embodiments, the method 10 pushes relevant design information to the user according to the time information. For example, if the user is again the network platform seller of wine products described above and he/she begins the design on October 30, the method 10 will include "Double 11" information in the text material it pushes, with the final design result illustrated in figs. 2-11. It should be understood that the method 10 may also push to the user one or more of picture, video, or audio material carrying "Double 11" or substantially equivalent information.
In some embodiments, a system is provided comprising: means for receiving a first input; means for obtaining profile information associated with the user based on the first input; means for obtaining design information associated with the profile information from a database based on the profile information; means for receiving a second input at the user interface; and means for displaying the design information on the user interface based on the second input. With the above system, a user can carry out the various embodiments of method 10 described above.
In some embodiments, a non-transitory storage medium is provided that stores a set of instructions that, when executed by a processor, enable the processor to perform the following process: receiving a first input; obtaining profile information associated with the user based on the first input; obtaining design information associated with the profile information from a database in accordance with the profile information; receiving a second input at the user interface; the design information is displayed on the user interface based on the second input.
Fig. 3 shows an example of a mobile device, here a smart phone comprising a non-transitory storage medium storing a set of instructions and a processor that executes the set of instructions to perform the following process: receiving a first input; obtaining profile information associated with the user based on the first input; obtaining design information associated with the profile information from a database in accordance with the profile information; receiving a second input at the user interface; displaying the design information on the user interface based on the second input. With the mobile device described above, a user can carry out the various embodiments of method 10 described above.
FIG. 4 shows a human-machine interaction design apparatus 40 comprising a display 43; at least one server 44, the server 44 storing a database including design information; the processor 42, the processor 42 and the display 43 are communicatively coupled, and the processor 42 and the server 44 are communicatively coupled via a communication link 45, where the processor 42 may be a physical conventional processor or a virtual machine-based processor. Through the human-computer interaction design device 40, the user 41 can complete the various embodiments of the method 10.
In some embodiments, based on the present disclosure, human-machine interaction design methods, systems, non-transitory storage media, mobile devices, and human-machine interaction design apparatus are presented. With the disclosure, a user can complete designs and modifications quickly and conveniently without a foundation in drawing or in operating specialized software; moreover, the user's design process follows the most natural human mode of interaction and is easy to learn and accept.
It is to be understood that the above description is intended to be illustrative, and not restrictive. For example, the embodiments (and/or aspects thereof) described above may be used in combination with one another. In addition, many modifications may be made to adapt a particular situation or material to the teachings of some embodiments without departing from their scope. While the dimensions and types of materials described herein are intended to define the parameters of some embodiments, the embodiments are by no means limiting and are exemplary only. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. The scope of some embodiments should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. In the appended claims, the terms "including" and "in which" are used as the plain-language equivalents of the respective terms "comprising" and "wherein." Furthermore, in the appended claims, the terms "first," "second," "third," etc. are used merely as labels and are not intended to impose numerical requirements on their objects. In addition, no limitation of the appended claims is to be construed in means-plus-function format unless and until the claim limitation expressly uses the phrase "means for" followed by a statement of function devoid of further structure.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
It will be apparent to those skilled in the art that some embodiments of the present disclosure may be provided as a method, apparatus, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Computer readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
This written description uses examples to disclose some embodiments, including the best mode, and also to enable any person skilled in the art to practice some embodiments, including making and using any devices or systems and performing any incorporated methods. The patentable scope of some embodiments is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal languages of the claims.
Claims (42)
1. A man-machine interaction design method comprises the following steps:
receiving a first input;
obtaining profile information associated with a user based on the first input;
obtaining design information associated with the profile information based on the profile information;
receiving a second input at the user interface; the second input comprises a user drawing lines and/or graphics on the user interface;
selecting corresponding design information based on the second input;
generating a design result based on the second input and the selected design information;
and displaying the generated design result on the user interface.
2. The method of claim 1, wherein the first input comprises a user identity information confirmation.
3. The method of claim 2, wherein the user identity information confirmation comprises at least one of: proprietary network access, account password input, fingerprint input, voiceprint input, retinal input.
4. A method according to claim 3, wherein the second input comprises at least one of: mouse input, keyboard input, gesture input, voice input.
5. The method of claim 4, wherein the gesture input comprises a user input through a touch user interface or a motion capture input.
6. The method of claim 5, wherein the user interface comprises at least one of: display screen, AR/VR helmet, 3D display.
7. The method of claim 6, further comprising: time information is obtained when the first input and/or the second input is made.
8. The method of claim 7, wherein the profile information comprises at least one of: age, gender, occupation area, area of interest of the user.
9. The method of claim 8, wherein the design information associated with the profile information includes at least one of: text material, picture material, video material, sound material associated with the user's professional domain.
10. The method of claim 9, further comprising: selecting at least one of the following according to the age of the user: the text material, the picture material, the video material and the sound material.
11. The method of claim 10, further comprising: selecting at least one of the following according to the time information: the text material, the picture material, the video material and the sound material.
12. The method of claim 11, further comprising: design information associated with the profile information is obtained from a database.
13. The method of claim 12, further comprising: design information associated with the profile information is obtained through real-time data analysis.
14. The method of claim 13, wherein the real-time data analysis comprises at least one of: performing data acquisition and analysis on the user's store on the network platform; and performing data acquisition and analysis on the stores of a plurality of network platform sellers in the user's industry.
15. The method of claim 14, wherein a machine learning method is used to perform the data acquisition and analysis on the stores of the plurality of network platform sellers in the user's industry.
16. The method of claim 1, wherein the lines comprise straight lines or curved lines; the graphics comprise rectangles, parallelograms, triangles, circles, and ellipses.
17. The method of claim 1, wherein the design information associated with the profile information comprises: text material whose length is close to the length of the straight line input by the user.
18. The method of claim 17, wherein the design information associated with the profile information comprises: picture material whose size is close to the size of the graphic input by the user.
19. The method of claim 18, wherein the design information associated with the profile information comprises: a plurality of text materials arranged in order.
20. The method of claim 18, wherein the design information associated with the profile information comprises: a plurality of picture materials arranged in order.
21. The method of any of claims 19-20, wherein the user can modify at least one of the text material or the picture material.
22. The method of claim 21, wherein when the user modifies one of the text material or the picture material, unmodified remaining material associated with the text material or the picture material is obtained from a database and automatically replaced in accordance with the user's modification.
23. The method of claim 22, further comprising: and automatically combining the picture material and the text material to generate a design drawing.
24. The method of claim 23, further comprising: the user selects a design pattern and generates a corresponding design drawing based on the selected design pattern.
25. The method of claim 24, further comprising: the user selects a typesetting mode and generates a corresponding design drawing according to the selected typesetting mode.
26. The method of claim 25, further comprising: the user selects a color pattern and generates a corresponding design drawing based on the selected color pattern.
27. A human-machine interaction design system, comprising:
means for receiving a first input;
means for obtaining profile information associated with a user based on the first input;
means for obtaining design information associated with the profile information based on the profile information;
means for receiving a second input at the user interface; the second input comprises a user drawing lines and/or graphics on the user interface;
means for selecting corresponding design information based on the second input;
means for generating a design result based on the second input and the selected design information;
means for displaying the generated design results on the user interface.
28. A non-transitory storage medium storing a set of instructions that, when executed by a processor, enable the processor to perform the method of any one of claims 1-26.
29. A mobile device, comprising: a processor and the non-transitory storage medium of claim 28.
30. A human-machine interaction design system, comprising: a server and a client, the server comprising the non-transitory storage medium of claim 28.
31. A human-machine interaction design apparatus, comprising:
a human-computer interaction interface;
at least one server;
the processor is in communication connection with the server through a communication link;
the non-transitory storage medium of claim 28.
32. The human-computer interaction design device of claim 31, wherein the processor is a virtualized processor.
33. The human-machine interaction design device of claim 31, wherein the human-machine interaction interface comprises at least one of: display screen, AR/VR helmet, 3D display.
34. The human-machine interaction design device of claim 33, wherein the design information is provided in a ranked manner on the human-machine interaction interface for selection by a user.
35. A man-machine interaction data processing method comprises the following steps:
the server receives a first input;
the server deriving profile information associated with the user based on the first input;
the server obtains design information associated with the profile information based on the profile information;
the server receives a second input; the second input comprises a user drawing lines and/or graphics on a user interface;
the server parses the second input and obtains parameters of the second input, the parameters including at least one of: length parameters, shape parameters, area parameters;
and selecting corresponding design information based on the parameters and sending the selected design information to the user side.
36. The human-machine interaction data processing method of claim 35, wherein the profile information includes store information of a user.
37. The human-machine interaction data processing method of claim 36, wherein the profile information comprises store information of other stores in the industry in which the user is located.
38. The human-machine interaction data processing method of claim 37, wherein the profile information comprises a data set obtained by data analysis and ranking of the store information of the user and the store information of the other stores, wherein the data set comprises at least one of: a text data set, a picture data set, an audio data set, and a video data set.
39. The human-machine interaction data processing method of claim 38, wherein the length parameter comprises: the ratio of the length of the straight line segment to the size length of the user interface.
40. The human-machine interaction data processing method of claim 39, wherein the area parameter comprises: the ratio of the area value of the shape to the area of the user interface.
41. The human-machine interaction data processing method of claim 40, wherein the shape parameters comprise: rectangle, parallelogram, triangle, circle, ellipse.
42. The human-machine interaction data processing method of claim 41, wherein the server selecting the corresponding design information based on the parameters comprises: selecting text data matched with the length of the straight line segment according to the length parameter; and selecting the picture data matched with the area of the shape according to the area parameter.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810117056.3A CN110120086B (en) | 2018-02-06 | 2018-02-06 | Man-machine interaction design method, system and data processing method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810117056.3A CN110120086B (en) | 2018-02-06 | 2018-02-06 | Man-machine interaction design method, system and data processing method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110120086A CN110120086A (en) | 2019-08-13 |
CN110120086B true CN110120086B (en) | 2024-03-22 |
Family
ID=67519898
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810117056.3A Active CN110120086B (en) | 2018-02-06 | 2018-02-06 | Man-machine interaction design method, system and data processing method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110120086B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1596406A (en) * | 2001-11-28 | 2005-03-16 | 皇家飞利浦电子股份有限公司 | System and method for retrieving information related to targeted subjects |
CN101651550A (en) * | 2008-08-15 | 2010-02-17 | 阿里巴巴集团控股有限公司 | Method and system for advertisement generation and display and advertisement production and display client |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100161411A1 (en) * | 2008-12-22 | 2010-06-24 | Kindsight | System and method for generating display advertisements from search based keyword advertisements |
US8359616B2 (en) * | 2009-09-30 | 2013-01-22 | United Video Properties, Inc. | Systems and methods for automatically generating advertisements using a media guidance application |
US9270806B2 (en) * | 2011-06-24 | 2016-02-23 | Google Inc. | Graphical user interface which displays profile information associated with a selected contact |
US20140039991A1 (en) * | 2012-08-03 | 2014-02-06 | Elwha LLC, a limited liabitity corporation of the State of Delaware | Dynamic customization of advertising content |
US20140279629A1 (en) * | 2013-03-12 | 2014-09-18 | Salesforce.Com, Inc. | System and method for generating an organization profile based on skill information |
US11151614B2 (en) * | 2014-09-26 | 2021-10-19 | Comcast Cable Communications, Llc | Advertisements blended with user's digital content |
US9825962B2 (en) * | 2015-03-27 | 2017-11-21 | Accenture Global Services Limited | Configurable sharing of user information |
US20170046749A1 (en) * | 2015-07-30 | 2017-02-16 | Venkateswarlu Kolluri | One-click promotional advertising |
- 2018-02-06: CN application CN201810117056.3A granted as patent CN110120086B (status: Active)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1596406A (en) * | 2001-11-28 | 2005-03-16 | 皇家飞利浦电子股份有限公司 | System and method for retrieving information related to targeted subjects |
CN101651550A (en) * | 2008-08-15 | 2010-02-17 | 阿里巴巴集团控股有限公司 | Method and system for advertisement generation and display and advertisement production and display client |
Also Published As
Publication number | Publication date |
---|---|
CN110120086A (en) | 2019-08-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR101806169B1 (en) | Method, apparatus, system and computer program for offering a shopping information | |
US10657568B1 (en) | System with interactive user interface for efficiently accessing component-level reviews | |
US10360623B2 (en) | Visually generated consumer product presentation | |
US10860634B2 (en) | Artificial intelligence system and method for generating a hierarchical data structure | |
US9607010B1 (en) | Techniques for shape-based search of content | |
US10740819B2 (en) | Information providing device, method, and non-transitory medium for interactive search refinement | |
US20170039233A1 (en) | Sankey diagram graphical user interface customization | |
CN103970850B (en) | Site information recommends method and system | |
KR20210098884A (en) | A method of providing a fashion item recommendation service using a body shape and purchase history | |
KR102227552B1 (en) | System for providing context awareness algorithm based restaurant sorting personalized service using review category | |
KR20180052489A (en) | method of providing goods recommendation for cross-border E-commerce based on user experience analysis and environmental factors | |
WO2021098310A1 (en) | Video generation method and device, and terminal and storage medium | |
KR20200045668A (en) | Method, apparatus and computer program for style recommendation | |
US20210090105A1 (en) | Technology opportunity mapping | |
KR102458510B1 (en) | Real-time complementary marketing system | |
Moncrieff et al. | An open source, server-side framework for analytical web mapping and its application to health | |
KR101518109B1 (en) | Service method and service system for merchandise branding | |
KR101764361B1 (en) | Method of providing shopping mall service based sns and apparatus for the same | |
US11961060B2 (en) | Systems and methods for assigning attribution weights to nodes | |
CN110120086B (en) | Man-machine interaction design method, system and data processing method | |
KR102421451B1 (en) | Systems and methods for efficient management and modification of images | |
CN110851568A (en) | Commodity information processing method, terminal device and computer-readable storage medium | |
KR20210111117A (en) | Transaction system based on extracted image from uploaded media | |
Chiera et al. | Visualizing big data: Everything old is new again | |
KR20210063665A (en) | Recommendation item based on user event information and apparatus performing the same |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40012230; Country of ref document: HK |
GR01 | Patent grant | ||
GR01 | Patent grant |