WO2021227951A1 - Front-end page element naming - Google Patents

Front-end page element naming

Info

Publication number
WO2021227951A1
Authority
WO
WIPO (PCT)
Prior art keywords
target page
page element
name
image
naming
Prior art date
Application number
PCT/CN2021/092136
Other languages
English (en)
Chinese (zh)
Inventor
谢杨易
崔恒斌
Original Assignee
支付宝(杭州)信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 支付宝(杭州)信息技术有限公司
Publication of WO2021227951A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/51 Indexing; Data structures therefor; Storage structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/55 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities
    • G06F40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/40 Processing or translation of natural language
    • G06F40/58 Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00 Arrangements for software engineering
    • G06F8/30 Creation or generation of source code
    • G06F8/38 Creation or generation of source code for implementing user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • This application relates to the field of computer network technology, and in particular to a method, device and electronic equipment for naming front-end page elements.
  • In front-end page development, developers usually need to name front-end page elements in order to improve the readability of the front-end page code and make the code easier to maintain later.
  • This application proposes a method for naming front-end page elements.
  • The above method includes: when the target page element is an image element, calculating the similarity between the target page element and each image in a preset image library; determining the maximum similarity among the calculated similarities; and determining the name of the image in the preset image library corresponding to the maximum similarity as the name of the target page element.
  • Calculating the similarity between the target page element and each image in the preset image library includes: inputting the element data of the target page element into a pre-trained classification model to obtain the classification result of the target page element, the classification model being a neural network model trained on a number of samples labeled with classification results; searching the preset image library for images whose classification result is the same as that of the target page element; and calculating the similarity between the target page element and each of the found images.
  • The above method further includes: when the target page element is a text element, inputting the element data of the target page element into a pre-trained translation model to obtain the English string corresponding to the target page element; and determining the English string as the name of the target page element.
  • The above method further includes: converting traditional characters in the target page element into simplified characters based on a pre-built mapping algorithm.
  • Determining the English character string as the name of the target page element includes: inputting the English character string into a pre-trained keyword extraction model to obtain keywords corresponding to the English character string; and determining the keywords as the name of the target page element.
  • the above method further includes: if the target page element is a container element, adding an identifier indicating that the target page element is a container element to the name of the target page element.
  • Adding an identifier indicating that the target page element is a container element to the name of the target page element includes: extracting keywords from the names of the elements in the container element; combining the keywords to obtain the name of the target page element; and adding, to that name, an identifier indicating that the target page element is a container element.
  • This application also proposes a device for naming front-end page elements, including: a calculation module, which, when the target page element is an image element, calculates the similarity between the target page element and each image in the preset image library; a first determining module, which determines the maximum similarity among the calculated similarities; and a second determining module, which determines the name of the image in the preset image library corresponding to the maximum similarity as the name of the target page element.
  • The calculation module is configured to: input the element data of the target page element into a pre-trained classification model to obtain the classification result of the target page element, the classification model being a neural network model trained on a number of samples labeled with classification results; search the preset image library for images whose classification result is the same as that of the target page element; and calculate the similarity between the target page element and each of the found images.
  • The above device further includes: a model calculation module, which, when the target page element is a text element, inputs the element data of the target page element into a pre-trained translation model to obtain the English character string corresponding to the target page element; and a third determining module, which determines the English character string as the name of the target page element.
  • The above device further includes: a conversion module, which converts traditional characters in the target page element into simplified characters based on a pre-built mapping algorithm.
  • The third determining module is configured to: input the English character string into a pre-trained keyword extraction model to obtain keywords corresponding to the English character string; and determine the keywords as the name of the target page element.
  • The above apparatus further includes: an adding module which, if the target page element is a container element, adds an identifier indicating that the target page element is a container element to the name of the target page element.
  • The adding module is configured to: extract keywords from the names of the elements in the container element; combine the keywords to obtain the name of the target page element; and add, to that name, an identifier indicating that the target page element is a container element.
  • When naming front-end page elements, the above system can calculate the similarity between the target page element and each image in the preset image library, and determine the name of the image in the preset image library corresponding to the maximum calculated similarity as the name of the target page element.
  • the above-mentioned system can extract keywords from the above-mentioned text element, and use the extracted keywords as the name of the above-mentioned text element.
  • the above-mentioned system may add an identifier indicating that the above-mentioned target page element is a container element to the name of the above-mentioned target page element, so as to realize the naming of the above-mentioned container element.
  • The element naming method disclosed in this application can name elements automatically, thereby improving naming efficiency, standardization, and correctness, and avoiding problems caused by manual naming such as low efficiency, failure to strictly follow naming conventions, and naming errors.
  • FIG. 1 is a method flowchart of a method for naming front-end page elements shown in this application;
  • FIG. 2 is a method flowchart of the text element naming method shown in this application.
  • FIG. 3 is a method flowchart of the container element naming method shown in this application.
  • FIG. 4 is a structural diagram of a device for naming front-end page elements shown in this application.
  • FIG. 5 is a hardware structure diagram of the device for naming front-end page elements shown in this application.
  • This application aims to propose a method for naming front-end page elements, so that a page element naming system can name different types of page elements, thereby avoiding problems such as low naming efficiency due to manual participation, failure to strictly follow naming conventions, and naming errors.
  • FIG. 1 is a flowchart of the method for naming front-end page elements shown in this application, applied to a page element naming system. As shown in FIG. 1, the above method includes:
  • S102: When the target page element is an image element, calculate the similarity between the target page element and each image in a preset image library.
  • S104: Determine the maximum similarity among the calculated similarities.
  • S106: Determine the name of the image in the preset image library corresponding to the maximum similarity as the name of the target page element.
  • the above-mentioned page element naming system (hereinafter referred to as the "system") may specifically be a piece of logic code carried in a terminal device.
  • When executing the above element naming method as the execution subject, the page element naming system relies on the terminal device it runs on to provide computing power.
  • the above system can provide an interactive platform for interacting with developers.
  • On the one hand, developers can provide the page elements that need to be named to the system and issue naming instructions to it; on the other hand, once the page elements have been named, the system can output the named page elements back to the developers.
  • the above-mentioned front-end page image is specifically a page image designed by a page image designer.
  • When a developer develops a front-end page, they usually need to refer to the page image designed by the page designer, so that the display effect of the finished front-end page matches that page image.
  • Front-end page elements are the main components of a front-end page and may include image elements, text elements, and container elements.
  • the above-mentioned image element specifically refers to an element whose content is an image.
  • the above text element specifically refers to an element whose content is text.
  • the above-mentioned characters may include traditional or simplified characters.
  • the aforementioned container element specifically refers to a collection of elements composed of several elements.
  • several image elements can form a container element.
  • Several text elements can form a container element.
  • Several text elements and several image elements can also form a container element together.
  • When a developer needs to name a certain element, the developer can provide the element and its element type to the system through the interactive platform provided by the system.
  • the aforementioned interactive platform may provide a window for the developer to input the element type of the element to be named.
  • the developer can also input the element type of the aforementioned element in the aforementioned window for the aforementioned system to identify the element type.
  • the above-mentioned system can automatically identify the element type of the above-mentioned element.
  • For example, the system may first perform OCR recognition on the element data corresponding to the element to obtain a recognition result, and then determine the element type of the element according to the recognition result.
  • Before introducing the specific steps, this application first introduces the principle of determining element types through OCR recognition.
  • OCR (Optical Character Recognition)
  • The principle is to compare the image features of the target image with the image features of the Chinese characters in an existing Chinese character library, and to output both the Chinese character that best matches the image features of the target image as the recognition result and the recognition confidence of that result.
  • the recognition confidence level may indicate to a certain extent the similarity between the image feature of the target image and the recognition result.
  • If the target image actually contains a Chinese character, the recognition confidence of the recognition result obtained after OCR will be relatively high.
  • Conversely, suppose the specific content of the target image is a pattern that merely resembles a certain Chinese character. Since the content of the target image is only similar to a Chinese character rather than an actual character, the recognition confidence will be relatively low.
  • When the element type is determined by OCR recognition, after OCR is performed on the element image, the system can check whether the recognition confidence of the recognition result reaches a preset threshold.
  • The preset threshold may be set by the developer based on experience or learned from a large number of samples, which is not limited here.
  • If the recognition confidence reaches the preset threshold, the element type is determined to be a text element; otherwise, the element type is determined to be an image element.
  • If the element is a collection of several text elements or image elements, it can be determined to be a container element.
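  • As an illustration only, the following Python sketch shows the confidence-threshold decision described above; the threshold value and the string labels are hypothetical stand-ins, since the application leaves the threshold to the developer's experience and does not prescribe concrete type labels.

```python
CONFIDENCE_THRESHOLD = 0.8  # hypothetical value; the application leaves the threshold to the developer

def element_type(ocr_confidence: float, sub_element_count: int) -> str:
    """Decide the element type using the OCR-confidence rule described above.

    ocr_confidence: confidence (0-1) reported by whatever OCR engine is used.
    sub_element_count: number of child elements the element contains.
    """
    if sub_element_count > 1:
        # A collection of several text/image elements is a container element.
        return "container"
    # High confidence: the content really is text; low confidence: it only
    # resembles characters, so treat the element as an image.
    return "text" if ocr_confidence >= CONFIDENCE_THRESHOLD else "image"

print(element_type(0.95, 1))  # -> "text"
print(element_type(0.30, 1))  # -> "image"
print(element_type(0.90, 3))  # -> "container"
```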
  • Alternatively, when determining the element type, the system may input the element data corresponding to the element into a pre-trained classifier and determine the element type based on the classifier's output.
  • the above-mentioned classifier may be specifically obtained by training based on a number of element image samples marked with element types; the above-mentioned element types include image elements, text elements, and container elements.
  • the above-mentioned classifier may be a multi-classifier constructed based on a neural network.
  • the aforementioned image library may specifically be a pre-configured image library.
  • the aforementioned image library can usually include several named images (images named according to naming conventions).
  • the images included in the above-mentioned image library can be classified and stored.
  • the above-mentioned image library can be divided into several storage spaces; among them, each storage space can store images of the same image type.
  • developers can obtain an image collection that includes several common element images. Then, the developer can name each image in the above-mentioned image collection according to the naming convention, classify the named images (either manually or through a classifier), and save them in the storage space corresponding to the above-mentioned image library.
  • Once configured, the image library can be reused repeatedly and does not need to be rebuilt every time a target element is named. Of course, the configured image library can still be updated, for example by adding new images or renaming existing ones.
  • the foregoing system may execute S102 to calculate the similarity between the foregoing target element and each image in the preset image library.
  • When calculating the similarity between the target element and each image in the preset image library, the system may first organize the element data of the target element into a feature vector to facilitate the similarity calculation.
  • For example, the system may first extract image features of the target element (for example, Harris corner points or SIFT features) and assemble them into a corresponding feature vector.
  • the foregoing system may execute the following steps S1022-S1026 for each image in the foregoing preset image library:
  • Count the feature vectors of the target element whose Euclidean distance to the image's feature vector is less than a preset reference threshold, and then use a preset mapping algorithm (for example, a normalization or standardization algorithm) to map that count to the similarity between the image and the target element.
  • the method for calculating the similarity is not limited in this application.
  • The similarity can also be calculated using, for example, the cosine distance, Manhattan distance, or Mahalanobis distance between feature vectors.
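  • The following Python sketch is one possible reading of the feature-matching step above: it counts the target element's feature vectors that have a close match in a library image (Euclidean distance below a reference threshold) and normalizes that count into a similarity. The threshold value and the normalization are illustrative assumptions; any of the other distance measures mentioned above could be substituted.

```python
import numpy as np

REFERENCE_THRESHOLD = 0.5  # hypothetical distance threshold

def similarity(target_features: np.ndarray, image_features: np.ndarray) -> float:
    """Map the number of close feature matches to a similarity score in [0, 1].

    target_features: (n, d) array of feature vectors extracted from the target element.
    image_features:  (m, d) array of feature vectors extracted from a library image.
    """
    # Pairwise Euclidean distances between every target feature and every image feature.
    dists = np.linalg.norm(target_features[:, None, :] - image_features[None, :, :], axis=-1)
    # Count target features that have at least one close match in the image.
    matches = np.sum(dists.min(axis=1) < REFERENCE_THRESHOLD)
    # "Mapping algorithm": normalize the match count into a similarity score.
    return float(matches) / max(len(target_features), 1)

# Toy usage with random feature vectors.
print(similarity(np.random.rand(5, 8), np.random.rand(6, 8)))
```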
  • After completing the above steps for every image in the preset image library, the system obtains the similarity between the target element and each image, together with the correspondence between each similarity and its image.
  • The system can then execute S104-S106 to determine the maximum similarity among the calculated similarities, and determine the name of the image in the preset image library corresponding to that maximum similarity as the name of the target page element.
  • To improve the efficiency of determining the maximum similarity, the system can push the obtained similarities into a max-heap (a heap in which the value of each parent node is greater than or equal to the values of its left and right child nodes). The system can then read the similarity stored in the root node of the max-heap and use it as the maximum similarity.
  • Since the root node of a max-heap always holds the maximum value maintained in the heap, the similarity stored in the root node is the maximum similarity among all the obtained similarities.
  • the system can determine the image corresponding to the maximum similarity from the recorded correspondence. After determining the image, the system may determine the name of the image as the name of the target element.
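  • A minimal sketch of the heap-based selection described above, using Python's heapq (a min-heap, so the similarities are negated); the example image names are hypothetical.

```python
import heapq
from typing import Dict, Tuple

def best_match(similarities: Dict[str, float]) -> Tuple[str, float]:
    """Return (image_name, similarity) for the maximum similarity via a heap."""
    # heapq is a min-heap, so negate the similarities to emulate a max-heap.
    heap = [(-sim, name) for name, sim in similarities.items()]
    heapq.heapify(heap)
    neg_sim, name = heap[0]  # the root node holds the maximum similarity
    return name, -neg_sim

# The library image with the highest similarity lends its name to the target element.
print(best_match({"icon_search": 0.72, "icon_cart": 0.91, "btn_pay": 0.40}))
```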
  • In summary, when naming front-end page elements, the system can calculate the similarity between the target page element and each image in the preset image library, and determine the name of the image corresponding to the maximum calculated similarity as the name of the target page element. Elements can therefore be named automatically, which improves naming efficiency, standardization, and correctness, and avoids problems caused by manual naming such as low efficiency, failure to strictly follow naming conventions, and naming errors.
  • In some embodiments, the element data of the target page element may first be input into the pre-trained classification model to obtain the classification result of the target page element.
  • the above classification model is a neural network model obtained by training based on a number of samples labeled with classification results.
  • During training, a number of samples labeled with classification results can be obtained first. These samples are then fed into the classification model and trained iteratively until the model converges; the converged model is used as the trained classification model.
  • After the classification result is obtained, the system can search the preset image library for images whose classification result is the same as that of the target page element, and then calculate the similarity between the target page element and each of the found images.
  • the system may directly read the image recorded in the storage space corresponding to the classification result.
  • Alternatively, the system may input the image data of each image in the preset image library into the classification model to obtain the image type of each image. After that, the system may treat any image whose image type is the same as that of the target element as an image with the same classification result as the target page element.
  • The system can then continue to execute S104-S106 to determine the maximum similarity among the calculated similarities, and determine the name of the image in the preset image library corresponding to that maximum similarity as the name of the target page element (for detailed steps, refer to the foregoing content, which will not be repeated here).
  • FIG. 2 is a method flowchart of the text element naming method shown in this application.
  • When the target page element is a text element, the system can first perform traditional-to-simplified conversion on the text content of the text element.
  • the above-mentioned system can be equipped with a mapping algorithm for converting traditional characters to simplified characters in advance. Through this mapping algorithm, the above-mentioned system can convert traditional characters in text elements into simplified characters.
  • the above-mentioned mapping algorithm may be an algorithm for converting traditional characters to simplified characters constructed based on the hanlp tool.
  • The algorithm can first segment the text content character by character, then check each segmented character to see whether it is a traditional Chinese character; if it is, the character is converted to the corresponding simplified character for output, otherwise the segmented character is output directly.
  • the above algorithm can recombine the output simplified characters into the text content of the above text element.
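  • A minimal dictionary-based sketch of the traditional-to-simplified mapping; the handful of character pairs below is purely illustrative, whereas a real implementation would rely on a full conversion table such as the one shipped with the hanlp tool mentioned above.

```python
# Tiny illustrative mapping table; a complete table would cover all traditional characters.
TRAD_TO_SIMP = {"頁": "页", "圖": "图", "點": "点", "擊": "击", "確": "确", "認": "认"}

def to_simplified(text: str) -> str:
    """Convert traditional characters to simplified ones, character by character."""
    return "".join(TRAD_TO_SIMP.get(ch, ch) for ch in text)

print(to_simplified("確認頁面"))  # -> "确认页面"
```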
  • the system can input the element data of the target page element into a pre-trained translation model for calculation to obtain the English character string corresponding to the target page element.
  • the above-mentioned system can be pre-loaded with a trained translation model.
  • the input Chinese text content can be converted into English text content.
  • The aforementioned translation model may be an NLP (Natural Language Processing) model based on seq2seq.
  • the model can first segment the text content according to the text, and then use the segmented text as input for semantic encoding to obtain the vector corresponding to the text content.
  • The vector can then be decoded into English text content based on the semantic encoding and an English word vocabulary.
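  • The application does not name a concrete translation model; as one possible stand-in, a publicly available Chinese-to-English seq2seq model (here Helsinki-NLP/opus-mt-zh-en from the transformers library) could play this role, as in the sketch below.

```python
from transformers import MarianMTModel, MarianTokenizer

MODEL_NAME = "Helsinki-NLP/opus-mt-zh-en"  # stand-in for the unspecified pre-trained translation model
tokenizer = MarianTokenizer.from_pretrained(MODEL_NAME)
model = MarianMTModel.from_pretrained(MODEL_NAME)

def translate_to_english(chinese_text: str) -> str:
    """Encode the Chinese text and decode it into an English string (seq2seq)."""
    inputs = tokenizer(chinese_text, return_tensors="pt")
    output_ids = model.generate(**inputs)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(translate_to_english("确认页面"))  # e.g. "Confirmation page"
```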
  • the system can select several keywords from the English text content as the name of the text element.
  • the above-mentioned system can be equipped with a keyword extraction model in advance.
  • keywords can be extracted from the input English text content.
  • The above keyword extraction model may be a model constructed based on the TF-IDF algorithm. After receiving the English text content of the text element, the model can first segment the text into words and count the frequency of each word in the text (TF, term frequency). After calculating the term frequency of each word, the model can combine it with the word's frequency across other English texts (IDF, inverse document frequency), rank the words by the resulting scores, and take the top N words as keywords, where N is a positive integer preset based on experience.
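  • A self-contained sketch of TF-IDF keyword ranking in the spirit of the description above; the tokenizer, the smoothing in the IDF formula, and the background corpus are illustrative choices, not details taken from the application.

```python
import math
import re
from collections import Counter
from typing import List

def tfidf_keywords(text: str, corpus: List[str], n: int = 3) -> List[str]:
    """Rank words in `text` by TF-IDF against a background `corpus` and return the top n."""
    tokenize = lambda s: re.findall(r"[a-zA-Z]+", s.lower())
    words = tokenize(text)
    tf = Counter(words)                                   # term frequency within this text
    docs = [set(tokenize(doc)) for doc in corpus]

    def idf(word: str) -> float:                          # inverse document frequency over other texts
        df = sum(1 for doc in docs if word in doc)
        return math.log((1 + len(docs)) / (1 + df)) + 1   # smoothed to avoid division by zero

    return sorted(tf, key=lambda w: tf[w] * idf(w), reverse=True)[:n]

# "confirm" ranks highest because it does not occur in the background texts.
print(tfidf_keywords("confirm payment button", ["cancel order button", "payment succeeded page"]))
```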
  • Alternatively, the keyword extraction model may be an NLP model based on TextRank. After receiving the English text content of the text element, the model first segments the text into words. The system then pairs adjacent words to obtain all possible combinations and calculates the connection weight between the words in each combination. After the connection weights are calculated, the system sums the connection weights associated with each word and sorts the words in the English text by that sum. The top N words are then used as keywords, where N is a positive integer preset based on experience.
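  • The following sketch follows the simplified TextRank-style scoring described above (summing the connection weights of adjacent word pairs) rather than the full iterative TextRank algorithm; stop-word filtering, which a production system would add, is omitted for brevity.

```python
import re
from collections import Counter, defaultdict
from typing import List

def textrank_keywords(text: str, n: int = 3) -> List[str]:
    """Score each word by the summed weights of its adjacent-word connections."""
    words = re.findall(r"[a-zA-Z]+", text.lower())
    # Connection weight = how often two words appear next to each other.
    pair_weights = Counter(zip(words, words[1:]))
    score: defaultdict = defaultdict(float)
    for (left, right), weight in pair_weights.items():
        score[left] += weight
        score[right] += weight
    return sorted(score, key=score.get, reverse=True)[:n]

print(textrank_keywords("confirm payment button confirm order button payment page"))
# -> ['payment', 'button', 'confirm']
```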
  • the system may determine the keyword as the name of the text element.
  • the foregoing system may add an identifier indicating that the target page element is a container element to the name of the target page element.
  • FIG. 3 is a method flowchart of the container element naming method shown in this application.
  • the above-mentioned system can first determine the element type of each element included in the above-mentioned container element.
  • the above system can use the method for determining element types disclosed in this application to determine the element types of the above elements one by one.
  • When the container element includes a text element, the system can use the text element naming method disclosed in this application to name that text element. After the naming is completed, the system may add an identifier indicating that the target page element is a container element to the name of the text element and use the result as the name of the container element, for example by adding the characters "container" before the name of the text element.
  • If the container element includes several text elements, the system may first determine, from the container element, the text element to be used for naming. The system can then use the text element naming method disclosed in this application to name the determined text element, and after naming is completed, add an identifier indicating that the target page element is a container element to that name and use the result as the name of the container element.
  • For example, the system may determine the first (or last) text element in the container element as the text element for naming, and perform the subsequent naming.
  • the foregoing system may determine the text element with the largest amount of data among the foregoing container elements as the text element for naming, and perform subsequent naming.
  • Alternatively, each text element may carry an identifier indicating its importance (the larger the value indicated by the identifier, the more important the text element).
  • the foregoing system may determine the text element with the largest value of the foregoing identifier carried in the foregoing container element as the text element for naming, and perform subsequent naming.
  • the method for determining the text element used for naming can be set according to the actual situation, which is not limited here.
  • the above-mentioned system may first use the method for naming text elements disclosed in this application to extract the keywords of each text element. Then, the above-mentioned system may combine the keywords to obtain the combined keywords, and add an identifier indicating that the target page element is a container element to the combined keywords as the name of the container element.
  • Alternatively, the system may first use the text element naming method disclosed in this application to extract the keywords of each text element, then determine the most important keyword among the extracted keywords, and add an identifier indicating that the target page element is a container element to that keyword as the name of the container element.
  • the above-mentioned system may input each keyword into the keyword extraction model described in this application for calculation, and then use the calculation result as the above-mentioned most important keyword.
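  • A minimal sketch of the keyword-combination naming of a container element; the underscore joining and the literal "container" prefix are illustrative conventions, since the application only requires that some identifier marking the element as a container be added.

```python
from typing import List

def name_container(child_keywords: List[str], prefix: str = "container") -> str:
    """Combine the child elements' keywords and prepend a container identifier."""
    combined = "_".join(k.strip().lower().replace(" ", "_") for k in child_keywords if k)
    return f"{prefix}_{combined}" if combined else prefix

print(name_container(["confirm payment", "order summary"]))
# -> "container_confirm_payment_order_summary"
```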
  • When the container element includes an image element, the system can use the image element naming method disclosed in this application to name that image element. After the naming is completed, the system may add an identifier indicating that the target page element is a container element to the name of the image element and use the result as the name of the container element, for example by adding the characters "container" before the name of the image element.
  • If the container element includes several image elements, the system may first determine, from the container element, the image element to be used for naming. The system can then use the image element naming method disclosed in this application to name the determined image element, and after naming is completed, add an identifier indicating that the target page element is a container element to that name and use the result as the name of the container element.
  • For example, the system may determine the first (or last) image element in the container element as the image element for naming, and perform the subsequent naming.
  • the foregoing system may determine the image element with the largest amount of data among the foregoing container elements as the image element for naming, and perform subsequent naming.
  • Alternatively, each image element may carry an identifier indicating its importance (the larger the value indicated by the identifier, the more important the image element).
  • the above-mentioned system may determine the image element with the largest value of the above-mentioned identifier carried in the above-mentioned container element as the image element for naming, and perform subsequent naming.
  • the method for determining the image element used for naming can be set according to the actual situation, which is not limited here.
  • the aforementioned system may first use the method for naming image elements disclosed in this application to determine the name of each image element. Then, the above system may combine the names of the image elements to obtain a combined name, and add an identifier indicating that the target page element is a container element to the combined name as the name of the container element.
  • the aforementioned system may first use the method for naming image elements disclosed in this application to determine the name of each image element. Then, the above-mentioned system may extract keywords from the determined names of each image element, and add an identifier indicating that the target page element is a container element to the above-mentioned keywords as the name of the container element.
  • the above-mentioned system may input the name of each image element into the keyword extraction model described in this application for calculation, and then use the calculation result as the above-mentioned keyword.
  • the naming method of the aforementioned container element can refer to the aforementioned content, which will not be described in detail here.
  • the system may combine the identifier indicating that the target page element is the container element with the assigned sequence number of the container element, and use the combined result as the name of the container element.
  • sequence numbers assigned to the above-mentioned container elements may be assigned according to actual conditions, which are not limited here.
  • the sequence number to which the above-mentioned container elements are assigned may indicate the order in which the above-mentioned container elements are created.
  • the sequence number assigned to the above-mentioned container element may be a sequence number assigned manually.
  • It can be seen that, when naming front-end page elements, the above system can calculate the similarity between the target page element and each image in the preset image library, and determine the name of the image corresponding to the maximum calculated similarity as the name of the target page element.
  • the above-mentioned system can extract keywords from the above-mentioned text element, and use the extracted keywords as the name of the above-mentioned text element.
  • the above-mentioned system may add an identifier indicating that the above-mentioned target page element is a container element to the name of the above-mentioned target page element, so as to realize the naming of the above-mentioned container element.
  • The element naming method disclosed in this application can therefore name elements automatically, thereby improving naming efficiency, standardization, and correctness, and avoiding problems caused by manual naming such as low efficiency, failure to strictly follow naming conventions, and naming errors.
  • this application also proposes a device for naming front-end page elements.
  • FIG. 4 is a structural diagram of a device for naming front-end page elements shown in this application.
  • the foregoing apparatus 400 may include:
  • a calculation module 410, which, when the target page element is an image element, calculates the similarity between the target page element and each image in the preset image library;
  • a first determining module 420, which determines the maximum similarity among the calculated similarities; and
  • a second determining module 430, which determines the name of the image in the preset image library corresponding to the maximum similarity as the name of the target page element.
  • The calculation module 410 is configured to: input the element data of the target page element into a pre-trained classification model to obtain the classification result of the target page element, the classification model being a neural network model trained on a number of samples labeled with classification results; search the preset image library for images whose classification result is the same as that of the target page element; and calculate the similarity between the target page element and each of the found images.
  • The above device 400 further includes: a model calculation module, which, when the target page element is a text element, inputs the element data of the target page element into a pre-trained translation model to obtain the English character string corresponding to the target page element; and a third determining module, which determines the English character string as the name of the target page element.
  • the device 400 further includes a conversion module, which converts the traditional characters in the target page elements into simplified characters based on a pre-built mapping algorithm.
  • The third determining module is configured to: input the English character string into a pre-trained keyword extraction model to obtain keywords corresponding to the English character string; and determine the keywords as the name of the target page element.
  • The apparatus 400 further includes: an adding module which, if the target page element is a container element, adds an identifier indicating that the target page element is a container element to the name of the target page element.
  • The adding module is configured to: extract keywords from the names of the elements in the container element; combine the keywords to obtain the name of the target page element; and add, to that name, an identifier indicating that the target page element is a container element.
  • The embodiment of the apparatus for naming front-end page elements shown in this application can be applied to an electronic device.
  • The device embodiments can be implemented by software, by hardware, or by a combination of software and hardware. Taking software implementation as an example, as a logical device, the apparatus is formed by the processor of the electronic device where it is located reading the corresponding computer program instructions from non-volatile storage into memory.
  • In terms of hardware, FIG. 5 is a hardware structure diagram of the device for naming front-end page elements shown in this application; in addition to the processor, memory, network interface, and non-volatile storage shown in FIG. 5, the electronic device in which the apparatus is located usually includes other hardware according to the actual function of the electronic device, which will not be repeated here.
  • the device includes: a processor; and a memory for storing executable instructions of the processor.
  • the above-mentioned processor is configured to call the executable instructions stored in the above-mentioned memory to implement any one of the above-mentioned methods for naming front-end page elements.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Human Computer Interaction (AREA)
  • Library & Information Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention relates to a method and apparatus for naming front-end page elements, and an electronic device. The method comprises the following steps: if a target page element is an image element, calculating the similarity between the target page element and each image in a preset image library (S102); determining the maximum similarity among the calculated similarities (S104); and determining the name of the image in the preset image library corresponding to the calculated maximum similarity as the name of the target page element (S106).
PCT/CN2021/092136 2020-05-09 2021-05-07 Front-end page element naming WO2021227951A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010384139.6A CN111291208B (zh) 2020-05-09 2020-05-09 Method, apparatus and electronic device for naming front-end page elements
CN202010384139.6 2020-05-09

Publications (1)

Publication Number Publication Date
WO2021227951A1 true WO2021227951A1 (fr) 2021-11-18

Family

ID=71021032

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/092136 WO2021227951A1 (fr) 2020-05-09 2021-05-07 Front-end page element naming

Country Status (2)

Country Link
CN (2) CN112307235B (fr)
WO (1) WO2021227951A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112307235B (zh) * 2020-05-09 2024-02-20 支付宝(杭州)信息技术有限公司 Method, apparatus and electronic device for naming front-end page elements


Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140189642A1 (en) * 2013-01-03 2014-07-03 International Business Machines Corporation Native Language IDE Code Assistance
CN107291430A (zh) * 2016-03-31 2017-10-24 富士通株式会社 Naming method and naming system
JP6881990B2 (ja) * 2017-01-30 2021-06-02 キヤノン株式会社 Image processing apparatus, control method therefor, and program
CN107239490B (zh) * 2017-04-24 2021-01-15 北京小米移动软件有限公司 Method and apparatus for naming face images, and computer-readable storage medium
WO2020068945A1 (fr) * 2018-09-26 2020-04-02 Leverton Holding Llc Named entity recognition with convolutional networks
CN109543516A (zh) * 2018-10-16 2019-03-29 深圳壹账通智能科技有限公司 Signing intention determination method and apparatus, computer device, and storage medium
CN109508191B (zh) * 2018-11-22 2022-03-22 北京腾云天下科技有限公司 Code generation method and system
CN109828748A (zh) * 2018-12-15 2019-05-31 深圳壹账通智能科技有限公司 Code naming method and system, computer device, and computer-readable storage medium
CN109933528A (zh) * 2019-03-11 2019-06-25 恒生电子股份有限公司 Method and apparatus for automated script encapsulation

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140289325A1 (en) * 2013-03-20 2014-09-25 Palo Alto Research Center Incorporated Ordered-element naming for name-based packet forwarding
CN106339479A (zh) * 2016-08-30 2017-01-18 深圳市金立通信设备有限公司 Picture naming method and terminal
CN107463683A (zh) * 2017-08-09 2017-12-12 上海壹账通金融科技有限公司 Code element naming method and terminal device
CN109992266A (zh) * 2017-12-29 2019-07-09 阿里巴巴集团控股有限公司 Interface element processing method and apparatus
CN110399586A (zh) * 2019-07-31 2019-11-01 深圳前海微众银行股份有限公司 Automated processing method, apparatus, device and medium for web interface elements
CN111291208A (zh) * 2020-05-09 2020-06-16 支付宝(杭州)信息技术有限公司 Method, apparatus and electronic device for naming front-end page elements

Also Published As

Publication number Publication date
CN112307235A (zh) 2021-02-02
CN111291208A (zh) 2020-06-16
CN112307235B (zh) 2024-02-20
CN111291208B (zh) 2020-11-10

Similar Documents

Publication Publication Date Title
WO2023060795A1 (fr) Procédé et appareil d'extraction automatique de mot-clé, et dispositif et support de stockage
US11544459B2 (en) Method and apparatus for determining feature words and server
WO2017107566A1 (fr) Procédé et système d'extraction basés sur une similarité de vecteur de mot
US20200081899A1 (en) Automated database schema matching
CN113011533A (zh) 文本分类方法、装置、计算机设备和存储介质
US11645475B2 (en) Translation processing method and storage medium
WO2021068683A1 (fr) Procédé et appareil pour la génération d'expression regulière, serveur et support de stockage lisible par ordinateur
CN109446885B (zh) 一种基于文本的元器件识别方法、系统、装置和存储介质
CN110619051B (zh) 问题语句分类方法、装置、电子设备及存储介质
CN110334209B (zh) 文本分类方法、装置、介质及电子设备
CN110162771B (zh) 事件触发词的识别方法、装置、电子设备
WO2021051864A1 (fr) Procédé et appareil d'extension de dictionnaire, dispositif électronique et support de stockage
WO2020114100A1 (fr) Procédé et appareil de traitement d'informations, et support d'enregistrement informatique
CN103678684A (zh) 一种基于导航信息检索的中文分词方法
CN112632226B (zh) 基于法律知识图谱的语义搜索方法、装置和电子设备
US11790174B2 (en) Entity recognition method and apparatus
WO2020199595A1 (fr) Procédé et dispositif de classification de long texte utilisant un modèle de sac de mots, appareil informatique et support de stockage
CN109063184B (zh) 多语言新闻文本聚类方法、存储介质及终端设备
WO2024109619A1 (fr) Procédé et appareil d'identification de données sensibles, dispositif et support de stockage informatique
CN109857957B (zh) 建立标签库的方法、电子设备及计算机存储介质
US8224642B2 (en) Automated identification of documents as not belonging to any language
CN111723192A (zh) 代码推荐方法和装置
WO2021227951A1 (fr) Dénomination d'élément de page d'extrémité avant
CN111325033A (zh) 实体识别方法、装置、电子设备及计算机可读存储介质
JP2004355224A (ja) 対訳表現抽出装置、対訳表現抽出方法、および対訳表現抽出プログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21803862

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21803862

Country of ref document: EP

Kind code of ref document: A1