WO2021230469A1 - Method for recommending items (Procédé de recommandation d'articles)


Info

Publication number
WO2021230469A1
Authority
WO
WIPO (PCT)
Prior art keywords
items
item
correlation
data
processor
Prior art date
Application number
PCT/KR2021/001532
Other languages
English (en)
Korean (ko)
Inventor
이대호
박건홍
Original Assignee
주식회사 세진마인드
Priority date
Filing date
Publication date
Application filed by 주식회사 세진마인드
Publication of WO2021230469A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/18Legal services
    • G06Q50/184Intellectual property management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/082Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/18Legal services

Definitions

  • the present invention relates to a method of recommending an item using a computing device, and more particularly, to a method of providing a list of recommended items by processing the frequency of designation of accompanying items.
  • designated goods determine the scope of trademark rights. Therefore, it is important to select designated goods so that the goods on which the applicant intends to use the mark can be sufficiently protected by the trademark right.
  • the present disclosure has been devised in response to the background art described above, and an object of the present disclosure is to provide a method for recommending a designated product.
  • the computer program includes instructions for causing one or more processors to perform the following steps: generating correlation data between items; extracting one or more items corresponding to a first item; and generating a list of recommended items for the first item based on the correlation data between the items and the extracted one or more items.
  • the generating of the correlation data between the items may include: increasing a degree of correlation between a plurality of items corresponding to the same metadata.
  • the metadata may include at least one of a trademark application number and an applicant code.
  • the increasing of the correlation between the plurality of items may include: recognizing a class of each of the plurality of items; and assigning a weight when increasing the correlation between items of different classes.
  • the generating of the correlation data between the items may include: generating a representative data group including one or more similar representative data; and increasing a degree of correlation between a plurality of items corresponding to the representative data group.
  • the generating of the representative data group may include: calculating a degree of similarity for each of one or more representative data included in the representative data group; and assigning a weight corresponding to each of the one or more representative data based on the degree of similarity.
  • the generating of the correlation data between the items may include: determining, among a plurality of items included in an item database, one or more items having properties similar to an item whose correlation is to be increased; and increasing the correlation with respect to the one or more items having similar properties.
  • the determining of the one or more items having similar properties may include: determining an item most similar to the item whose correlation is to be increased; and determining the most similar item as the one or more items having similar attributes.
  • the increasing of the correlation with respect to the one or more items having similar attributes may include: calculating a similarity between the item whose correlation is to be increased and the one or more items having similar attributes; and assigning a weight to each of the one or more items having similar attributes based on the similarity.
  • the one or more items may be extracted based on a distance between correlation vectors calculated using the correlation data.
  • one or more items corresponding to the first item may be extracted using a similarity calculation model between items.
  • the correlation data may be a compressed sparse matrix.
  • the correlation data may include a predicted correlation vector generated based on metadata related to the item.
  • generating the list of recommended items may include: recognizing correlation data related to a first item; recognizing the class of the first item and the class of each of one or more items for which a correlation with the first item exists; and assigning weights to one or more items having classes different from the class of the first item.
  • generating the list of recommended items may include: recognizing correlation data related to a first item; recognizing the class of the first item and the class of each of one or more items for which a correlation with the first item exists; and generating the list of recommended items based on a list of one or more items having the same class as the class of the first item.
  • the generating of the list of recommended items may include: recognizing correlation data related to a first item; recognizing the class of the first item and the class of each of one or more items for which a correlation with the first item exists; and generating the list of recommended items based on a list of one or more items having a class different from the class of the first item.
  • the computing device may include: a processor; and a memory; wherein the processor generates correlation data between items, extracts one or more items corresponding to a first item, and generates a list of recommended items for the first item based on the correlation data between the items and the extracted one or more items.
  • according to the designated product recommendation method, it is possible to provide a list of recommended items highly related to the currently selected designated product.
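  • For illustration only, the three steps above (generating correlation data, extracting items, and generating the recommendation list) can be sketched in Python as follows; the event log, the item names, and the counting scheme are hypothetical assumptions, not the claimed implementation.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical event log: each event lists the items designated together.
events = [["shoes", "bags", "clothing"], ["shoes", "hats"], ["bags", "clothing"]]

# Step 1: generate correlation data between items (here, co-designation counts).
correlation = defaultdict(lambda: defaultdict(float))
for items in events:
    for a, b in combinations(sorted(set(items)), 2):
        correlation[a][b] += 1
        correlation[b][a] += 1

# Step 2: extract one or more items corresponding to the first item.
first_item = "shoes"
candidates = correlation[first_item]

# Step 3: generate a list of recommended items, highest correlation first.
recommended = sorted(candidates, key=candidates.get, reverse=True)
print(recommended)  # order depends on the assumed event log
```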
  • FIG. 1 is a block diagram of a computing device for performing a designated product recommendation method according to an embodiment of the present disclosure.
  • FIG. 3 illustrates metadata and representative data according to some embodiments of the present disclosure.
  • FIG. 5 is a schematic diagram illustrating a network function according to an embodiment of the present disclosure.
  • FIG. 6 illustrates a list of recommended items according to some embodiments of the present disclosure.
  • FIG. 7 is a flowchart illustrating a process in which a processor performs an item recommendation method according to some embodiments of the present disclosure.
  • FIG. 8 is a flowchart illustrating a process in which a processor performs a method of generating correlation data between items according to some embodiments of the present disclosure.
  • FIG. 9 is a flowchart illustrating a process in which a processor generates correlation data between items according to some embodiments of the present disclosure.
  • FIG. 10 is a flowchart illustrating a process in which a processor generates a representative data group according to some embodiments of the present disclosure.
  • FIG. 11 is a flowchart illustrating a process in which a processor performs a method of generating correlation data between items according to some embodiments of the present disclosure.
  • FIG. 12 is a flowchart illustrating a process in which a processor determines one or more items having similar attributes to an item to increase a correlation, according to some embodiments of the present disclosure.
  • FIG. 13 is a flowchart illustrating a process in which a processor generates a list of recommended items according to some embodiments of the present disclosure.
  • FIG. 14 is a flowchart illustrating a process in which a processor generates a list of recommended items according to some embodiments of the present disclosure.
  • FIG. 15 is a flowchart illustrating a process in which a processor generates a list of recommended items according to some embodiments of the present disclosure.
  • FIG. 16 is a simplified, general schematic diagram of an exemplary computing environment in which embodiments of the present disclosure may be implemented.
  • a component can be, but is not limited to being, a process running on a processor, a processor, an object, a thread of execution, a program, and/or a computer.
  • an application running on a computing device and the computing device may be a component.
  • One or more components may reside within a processor and/or thread of execution.
  • a component may be localized within one computer.
  • a component may be distributed between two or more computers.
  • these components can execute from various computer readable media having various data structures stored therein.
  • Components may communicate via local and/or remote processes with another system, for example over a network such as the Internet, via a signal having one or more data packets (e.g., data and/or signals from one component interacting with another component in a local system, a distributed system, etc.).
  • herein, the terms network function, artificial neural network, and neural network may be used interchangeably.
  • FIG. 1 is a block diagram of a computing device for performing a designated product recommendation method according to an embodiment of the present disclosure.
  • the configuration of the computing device 100 shown in FIG. 1 is only a simplified example.
  • the computing device 100 may include other components for performing the computing environment of the computing device 100 , and only some of the disclosed components may configure the computing device 100 .
  • the computing device 100 may include a processor 110 and a memory 120 .
  • the processor 110 may include one or more cores, and may include a processor for data analysis and deep learning, such as a central processing unit (CPU), a general purpose graphics processing unit (GPGPU), or a tensor processing unit (TPU) of the computing device.
  • the processor 110 may read a computer program stored in the memory 130 and perform data processing for machine learning according to an embodiment of the present disclosure. According to an embodiment of the present disclosure, the processor 110 may perform an operation for learning the neural network.
  • the processor 110 may perform calculations for learning of the neural network, such as processing input data for learning in deep learning (DL), extracting features from the input data, calculating an error, and updating the weights of the neural network using backpropagation.
  • At least one of a CPU, a GPGPU, and a TPU of the processor 110 may process learning of a network function.
  • the CPU and the GPGPU can process learning of a network function and data classification using the network function.
  • learning of a network function and data classification using the network function may be processed by using the processors of a plurality of computing devices together.
  • the computer program executed in the computing device according to an embodiment of the present disclosure may be a CPU, GPGPU, or TPU executable program.
  • the memory 120 may store any type of information generated or determined by the processor 110 and any type of information received by the communication unit (not shown).
  • the memory 120 may include at least one type of storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (e.g., SD or XD memory), Random Access Memory (RAM), Static Random Access Memory (SRAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Programmable Read-Only Memory (PROM), a magnetic memory, a magnetic disk, and an optical disk.
  • the computing device 100 may operate in relation to a web storage that performs a storage function of the memory 120 on the Internet.
  • the description of the above-described memory is only an example, and the present disclosure is not limited thereto.
  • a communication unit (not shown) according to an embodiment of the present disclosure may use a variety of wired communication systems such as a Public Switched Telephone Network (PSTN), x Digital Subscriber Line (xDSL), Rate Adaptive DSL (RADSL), Multi Rate DSL (MDSL), Very High Speed DSL (VDSL), Universal Asymmetric DSL (UADSL), High Bit Rate DSL (HDSL), and Local Area Network (LAN).
  • the communication unit (not shown) presented herein may use a variety of wireless communication systems such as Code Division Multi Access (CDMA), Time Division Multi Access (TDMA), Frequency Division Multi Access (FDMA), Orthogonal Frequency Division Multi Access (OFDMA), Single Carrier-FDMA (SC-FDMA), and other systems.
  • the communication unit may be configured regardless of its communication mode, such as wired and wireless, and may be composed of various communication networks such as a personal area network (PAN) and a wide area network (WAN).
  • the network may be a well-known World Wide Web (WWW), and may use a wireless transmission technology used for short-range communication such as Infrared Data Association (IrDA) or Bluetooth.
  • the correlation data 200 may include information on the degree of correlation 220 between arbitrary items.
  • the correlation data 200 may include correlation information between items belonging to one class or a plurality of classes.
  • the correlation data 200 may be expressed as a two-dimensional or more matrix including a plurality of correlation information.
  • the degree of correlation may be data expressing the degree to which a plurality of items have been related to the same fact or event during a previous point in time.
  • the degree of correlation may be calculated by synthesizing indices related to a plurality of items.
  • the degree of correlation of the plurality of items may be calculated based on a correlation index between the individual items and weight indicators between the individual items.
  • the relevance indicator between individual items may be an indicator regarding the frequency with which the individual items are related to the same event.
  • the relevance index between individual items may be recognized based on at least one of metadata and representative data related to the event.
  • the metadata may include identification information of an event, identification information of a subject related to the event, information on when the event occurs, and the like.
  • Representative data related to an event may include an event name, image, video, thumbnail, and voice representing the event.
  • the metadata may include applicant name information, applicant code information, application date, application publication date, registration date, and the like.
  • the representative data may include a trademark name expressed in text, an image, an image, an audio mark, and a thumbnail thereof.
  • the above-described example is merely an example of the types of metadata and representative data, and the types of metadata and representative data are not limited thereto.
  • the processor 110 may identify events having the same or similar metadata and representative data information, and may identify information on items related to the corresponding events.
  • the processor 110 may generate a relevance index based on the identified item information.
  • the relevance index may relate to an index in which individual items are related to the same event.
  • the weight index between individual items may be an index quantifying the similarity between intrinsic/non-essential properties of the individual items.
  • the intrinsic properties of the item may include intrinsic characteristics of the item, for example, whether the item is a good or a service, whether the item is tangible or intangible, and what object is the object of the item.
  • the non-essential attribute of the item may include a class to which the item belongs, identification information of a group of similar items, and the like.
  • the weight index of a plurality of items may be calculated using whether a common attribute possessed by the individual items exists, the degree of similarity between the attributes possessed by the individual items, whether the individual items appear within the same or similar event, and whether the items belong to the same class.
  • this is only an example of factors for determining the weight index, and the elements for determining the weight index are not limited thereto.
  • the processor 110 may further consider additional factors as well as the above-described examples in determining the weight index. Also, the processor 110 may selectively determine factors to be considered when determining the weight index. Accordingly, at least one factor affecting the determination of the weight index may be different depending on the execution time of the item recommendation method according to the present disclosure. For example, the processor 110 may determine the weight index based only on whether the attribute information of the item matches. Alternatively, the processor 110 may determine the weight index based on only the item class information. Alternatively, the processor 110 may determine the weight index by using all available weight index determining factors. In addition, with respect to the determination of the weight index, each element may have a different weight, and the weight of each element may be different depending on the execution time of the item recommendation method according to the present disclosure.
  • the weight index may be calculated using a weighted average of values of at least one element selected for determining the weight index.
  • the weighted average weight of each element may be different depending on the execution time of the item recommendation method according to the present disclosure.
  • the weight index may be a maximum value, a minimum value, and a median value of values of at least one element selected for determining the weight index.
  • the processor 110 may use various statistical techniques using the at least one factor to determine the weight index. The above-described method of determining the weight index is merely exemplary, and the method of determining the weight index is not limited thereto.
  • the processor 110 may calculate a degree of correlation between a plurality of items based on the relevance index and the weight index. For example, the processor 110 may derive the degree of correlation by applying the weight index to the relevance index. For example, the processor 110 may derive the degree of correlation based on an arithmetic operation between the relevance index and the weight index. Alternatively, the processor 110 may derive the correlation by modifying the weight index and applying it to the relevance index. That is, the processor 110 may apply a value obtained by performing a preset operation on the weight index to the relevance index. For example, the preset operation may be taking a reciprocal, or applying a formula prepared for specific types of items.
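  • As a purely numeric sketch of the combination described above (the disclosure does not fix a single formula; the values and the multiplicative form below are assumptions for illustration):

```python
# Hypothetical indices for one pair of items.
relevance_index = 3.0   # e.g. the two items were related to the same event 3 times
weight_index = 0.8      # e.g. derived from attribute/class similarity

# Applying the weight index to the relevance index (one possible arithmetic operation).
correlation = relevance_index * weight_index                # 2.4

# Applying a preset operation (here a reciprocal) to the weight index first.
correlation_alt = relevance_index * (1.0 / weight_index)    # 3.75
print(correlation, correlation_alt)
```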
  • the correlation 220 included in the correlation data 200 may be a predicted correlation between items.
  • the predicted correlation may be generated using an eigenvalue decomposition (EVD) or a singular value decomposition (SVD) technique.
  • the processor 110 may calculate a raw correlation from the relevance index and the weight index by using the above-described correlation generation technique.
  • the processor 110 may generate predictive correlation data by applying at least one of EVD and SVD techniques to the generated raw correlation data including a plurality of raw correlations. When the SVD technique is applied as described above, the processor 110 may determine the generated prediction correlation data as the correlation data 200 .
  • since the previously collected correlation data are finely adjusted to the various needs of individual applicants, they may not capture a general tendency for a specific designated product. Accordingly, as described above, if SVD is applied when recommending items to be designated together, a more generalized list of recommended items can be generated.
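  • A minimal sketch of the SVD step described above, assuming a small symmetric raw correlation matrix; the matrix values and the chosen rank are illustrative only.

```python
import numpy as np

# Assumed raw correlation matrix between four items.
raw = np.array([
    [0.0, 1.0, 4.0, 5.0],
    [1.0, 0.0, 6.0, 2.0],
    [4.0, 6.0, 0.0, 1.0],
    [5.0, 2.0, 1.0, 0.0],
])

# Truncated SVD: keeping only the k largest singular values smooths out
# applicant-specific fine adjustments and keeps a more general tendency.
k = 2
U, s, Vt = np.linalg.svd(raw)
predicted = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
print(np.round(predicted, 2))  # predicted correlation data (rank-k approximation)
```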
  • the processor 110 may change the degree of correlation between the first item and related items. That is, when one or more first items are input, the processor 110 may consider that an event has already occurred for all combinations of a plurality of input items. In this case, the processor 110 may increase or decrease the correlation for all combinations of the plurality of items. The processor 110 may generate a list of recommended items based on the degree of correlation to which the change is performed and the correlation data.
  • the correlation 220 between items may be determined based on one or more events associated with two or more items.
  • an event may mean a fact or phenomenon that occurs by using at least one of one or more items and data as the object.
  • data representing the object of the event and the event itself and metadata information about the event may be generated together.
  • the event may be the fact of viewing the item, the fact of visiting the item, or the filing of an intellectual property right for the item.
  • when the event is the viewing of an item, information about the title of the item, the thumbnail of the item, the identification code of the item, the genre classification of the item, the viewing time of the item, the viewer of the item, and the like may be generated.
  • when the event is an application for intellectual property rights for an item, information about the name of the item, representative data related to the item, the identification code (application number) of the application, the time of filing, the applicant, the identification code of the applicant, and the like may be generated together.
  • Such information may be used to derive a degree of correlation between a plurality of items by being used to calculate a relevance index and a weight index when calculating the degree of correlation as described above.
  • the event may include an interaction with the corresponding POI.
  • the interaction may include a visit fact, a review fact for the POI, a review score for the POI, and the like.
  • object information of the POI, thumbnail information of the POI, a hash value of review data for the POI, and the like may be generated together.
  • the processor 110 may calculate a correlation based on such data.
  • the item 210 may be an object that is a target of an event.
  • an item may have intrinsic and non-essential attributes.
  • the essential attribute of the item may mean an attribute defined without an external attribute definition procedure for the item.
  • the non-essential attribute of the item may mean an attribute defined by an external attribute definition for the item.
  • the intrinsic property of an item may include whether the item is a good or a service, whether the item is tangible or intangible, what object is the target of the item, and the like.
  • the non-essential attribute of the item may include identification information of an externally defined item class or similar item group.
  • the essential and non-essential properties of these items may be pre-stored in the memory 120 and read by the processor 110 .
  • the processor 110 may recognize items related to each of a plurality of events and attributes of the items in order to determine a weight index used to calculate a degree of correlation between the plurality of items.
  • the processor 110 may calculate a weight index between items based on both intrinsic and non-essential properties of the items. However, at least one attribute selected by the processor 110 to calculate the weight index may be variable. Accordingly, the properties of items used to calculate the weight index may be different depending on the execution time of the item recommendation method according to the present disclosure.
  • an item may be associated with at least one of any class and any item group identification information. That is, at least one of the first class and the first similar item group may include items A to F, and at least one of the second class and the second similar item group may include items G to Z.
  • the set of items included in the class and the set of items included in the similar item group are not mutually exclusive. That is, the first item may be included in one or more item groups or one or more classes.
  • the item recommendation method according to the present disclosure may be related to the field of trademark application.
  • the item 210 may be a designated product and a designated product name.
  • the item 210 may be recognized from an item database pre-stored in the memory 120 .
  • the item 210 may be input by a user using the item recommendation method according to the present disclosure through an input device (not shown) related to the computing device 100 .
  • contents regarding the item are only exemplary contents regarding the item, and contents regarding the properties of the item, the relationship between the item and the class and the group of similar items are not limited thereto.
  • the item 210 may be associated with at least one of a class or a group of similar items.
  • a class or similar item group may be defined as a type of identification data assigned to items having similar properties.
  • although an attribute of an item is used to provide class and similar item group information, the attribute used is not necessarily dependent on the essential attributes of the above-described item. Accordingly, in the present disclosure, non-essential properties of items such as class and similar item group information may be arbitrarily defined.
  • a class and a similar item group may be identification information for a set of one or more items having the same and similar essential properties.
  • a class and a similar item group may be identification information for a set of one or more items having the same and similar arbitrary properties.
  • the class or similar item group may be a set of items in which a similarity between items calculated based on a distance between correlation vectors satisfies a preset condition.
  • the correlation vector may include correlation values between an arbitrary item and other items. Referring back to FIG. 2 , the correlation vector for item A may be (0, 1, 4, 5, 14, 7). Also, the correlation vector for the item E may be (14, 6, 22, 1, 0, 4).
  • the processor 110 may determine items having the shortest distance between the correlation vectors as the items corresponding to the first item.
  • the correlation vector may represent the frequency with which one item is designated together with other items. Accordingly, when the correlation vectors are similar (i.e., when the distance between the correlation vectors is short), the two items can be regarded as similar items. Since the correlation vectors of similar items can then be reflected in the list of recommended items, trademark rights can be protected more comprehensively.
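  • A small sketch comparing the two correlation vectors given above; Euclidean distance is used here as one possible distance, which is an assumption rather than a requirement of the disclosure.

```python
import numpy as np

# Correlation vectors for items A and E from the example above.
vec_a = np.array([0, 1, 4, 5, 14, 7], dtype=float)
vec_e = np.array([14, 6, 22, 1, 0, 4], dtype=float)

# A shorter distance between correlation vectors means the two items tend to be
# designated together with similar sets of other items.
distance = np.linalg.norm(vec_a - vec_e)
print(round(distance, 2))  # approximately 27.68
```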
  • Non-essential attributes such as class and similar item group information for an item may be stored in the memory 120 and read by the processor 110 . Accordingly, when calculating the degree of correlation between the plurality of items, the processor 110 recognizes each of the plurality of items, and reads, if necessary, at least one or more class and similar item group information for each of the items from the memory 120 . can be recognised.
  • the processor 110 may determine the weight index by using whether the non-essential properties are the same, the similarity between the non-essential properties, and the like. Specifically, the processor 110 may recognize a measure of identity and similarity (i.e., a degree of similarity) between non-essential attributes. The similarity between these non-essential attributes may be pre-stored in the memory 120 and read by the processor 110. Alternatively, when the class or similar item group information is text information, the processor 110 may calculate the similarity between non-essential attributes using a conventional string-similarity method such as Levenshtein Distance, Hamming Distance, Smith-Waterman, or Sørensen-Dice Coefficient, or a neural-network-based word similarity technique for text processing.
  • the processor 110 may calculate the similarity between information of a class or similar item group in the form of text information based on a neural network that calculates a similarity between texts in which non-essential properties of items are expressed.
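  • As one concrete instance of the string-similarity methods named above, a plain Levenshtein-distance sketch is shown below; the two item-group labels are hypothetical.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Hypothetical similar-item-group labels; normalize the distance into a 0..1 similarity.
a, b = "retail services for footwear", "retail services for clothing"
similarity = 1 - levenshtein(a, b) / max(len(a), len(b))
print(round(similarity, 2))
```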
  • the processor 110 may calculate the weight index by using the similarity between the calculated non-essential attributes. For example, the processor 110 may assign a higher weight as the non-essential properties are similar, and conversely, give a lower weight as the non-essential properties are different. Conversely, in order to recommend several items with different non-essential properties, the processor 110 may give a lower weight as the non-essential properties are similar and, conversely, give a higher weight as the non-essential properties are different. As described above, the processor 110 may perform such a series of selections without direct instruction from the user. For example, when the user gives “multi-class selection”, the processor 110 may give a higher weight index based on this input as the non-essential properties are different.
  • alternatively, even without a direct user input, if the classes of the plurality of first items selected by the user to receive the item recommendation differ from one another, the processor 110 may automatically recognize this as a "multi-class selection" and assign a higher weight index as the non-essential properties are different. Conversely, if the classes of the plurality of first items selected by the user to receive the item recommendation are all the same, the processor 110 may automatically recognize this as a "single class selection" and assign a higher weight index as the non-essential properties are similar.
  • the processor 110 may calculate correlation and correlation data so as to generate an item recommendation list suitable for a user's needs.
  • a class can contain a very large number of items.
  • when the correlation data 200 is expressed as a matrix of two or more dimensions, most of the elements of the matrix may be zero. In this case, the correlation data 200 may be a sparse matrix.
  • the processor 110 may express the correlation data 200 as a compressed sparse matrix.
  • the processor 110 may store the correlation data 200 using a dictionary of keys (DOK), a list of lists (LIL), a coordinate list (COO), a compressed sparse row (CSR), or the like.
  • the processor 110 may save memory space by compressing and expressing the correlation data 200 .
  • the item 210 may correspond to a designated product. Since there are hundreds of thousands of known designated product names, memory space can be greatly saved by using the sparse matrix compression method according to some embodiments of the present disclosure.
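  • A short sketch of the compressed sparse formats named above using scipy.sparse; the matrix contents are assumed for illustration.

```python
import numpy as np
from scipy import sparse

# Mostly-zero correlation matrix (values assumed).
dense = np.zeros((6, 6))
dense[0, 2], dense[0, 4], dense[2, 4] = 4, 14, 22

coo = sparse.coo_matrix(dense)   # coordinate list (COO)
csr = coo.tocsr()                # compressed sparse row (CSR)
lil = coo.tolil()                # list of lists (LIL)
dok = coo.todok()                # dictionary of keys (DOK)

print(csr.nnz)        # 3 stored values instead of 36 dense entries
print(csr.toarray())  # round-trips back to the dense form
```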
  • FIG. 3 illustrates metadata and representative data according to some embodiments of the present disclosure.
  • the metadata 300 may mean secondary information related to an event.
  • the metadata may include identification information of an event, identification information on a subject related to the event, information on when the event occurs, and the like.
  • the metadata 300 may include an applicant, an application date, an application number, and the like.
  • the processor 110 may increase the relevance index for calculating the relevance based on the identity and similarity of the meta data. For example, the processor 110 may recognize one or more events corresponding to the same metadata. Here, the processor 110 may recognize all one or more items related to one or more events corresponding to the same metadata. As another example, the processor 110 may determine a degree of similarity between meta data to recognize all one or more items related to one or more events related to similar meta data. Specifically, the processor 110 may calculate the similarity between meta data using the method for determining the similarity of text and images and the neural network-based methods as described above with reference to FIG. 2 . The above-described example is merely an example of a method for the processor 110 to recognize one or more items from one or more events corresponding to the same and similar metadata, and the method for recognizing the one or more items is not limited thereto.
  • the processor 110 may increase the relevance index for the recognized combination of one or more items. For example, when a plurality of items are related within the same event, the processor 110 may increase the relevance index for all combinations of the plurality of items. Alternatively, the processor 110 may increase the relevance index for all combinations of a plurality of items related to a plurality of events corresponding to similar metadata. When increasing the relevance index between items related to a plurality of events corresponding to the similar metadata, the processor 110 may apply the similarity between the metadata of each event. For example, when the similarity between the first metadata related to the first event and the second metadata related to the second event is 0.6, then, when increasing the relevance index between the first item related to the first event and the second item related to the second event, the processor 110 may apply (multiply) 0.6 to the base increment of 1, so that the relevance index increases by 0.6.
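  • The 0.6 example above can be written as a one-line update; the relevance table and item names are hypothetical.

```python
# Hypothetical relevance-index table and metadata similarity between two events.
relevance = {("first_item", "second_item"): 0.0}
metadata_similarity = 0.6

# Base increment of 1 per co-occurrence, scaled by the metadata similarity.
relevance[("first_item", "second_item")] += 1 * metadata_similarity
print(relevance[("first_item", "second_item")])  # 0.6
```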
  • the processor 110 may regard events having similar metadata as one event. In this case, the processor 110 may increase the relevance index only once for all items related to the event corresponding to the same and similar metadata.
  • the processor 110 may consider each event as a separate event and increase the relevance index overlappingly.
  • the processor 110 may recognize a plurality of trademark application cases having the same or similar applicant, the same or similar applicant code, the same or similar application number, and the same or similar application time.
  • the processor 110 may recognize all items included in a plurality of recognized trademark application cases.
  • the processor 110 may calculate a degree of similarity between applicant information, a degree of similarity between applicant codes, a degree of similarity between application numbers, and a degree of similarity between filing times in a plurality of trademark application cases.
  • the processor 110 may increase the relevance index for all combinations of the recognized items. As described above, in order to increase the relevance index, the processor 110 may recognize all the trademark application cases as a single trademark application event and increase the relevance index by 1 for all combinations of related items. Alternatively, the processor 110 may increase the relevance index for each individual trademark event. When the processor 110 increases the relevance index for a combination of items for each individual trademark event, it may increase the relevance index for the combination of items by reflecting the computed similarity value between the metadata.
  • the method for increasing the correlation between items is not limited thereto.
  • the degree of correlation and the correlation data may be different even when a multi-class application for a trademark and a plurality of single-class applications for a trademark are performed for substantially the same purpose.
  • the degree of correlation can be increased for each of a plurality of applications (events). Accordingly, correlation data 200 suitable for overlapping protection of trademarks may be generated.
  • the representative data 400 may be data serving to represent the event itself.
  • representative data may include an event name, image, video, thumbnail, voice, etc. representing the event.
  • the representative data may be multimedia data in which the event name, image, video, thumbnail, voice, and the like are combined. Since the above description is merely an example of the type of representative data, the type of representative data is not limited thereto.
  • the processor 110 may generate a representative data group.
  • the processor 110 may determine the degree of similarity between the representative data to generate the representative data group.
  • the processor 110 may determine a degree of similarity between representative data based on a neural network model that detects a degree of similarity between images or words.
  • when the determined degree of similarity satisfies a preset criterion, the processor 110 may include the representative data in one representative data group. Since the above description is only an example of a method for forming the representative data group, the method for generating the representative data group is not limited thereto.
  • the representative data 400 may be a mark in the form of an image.
  • the representative data 400 may be a brand name indicated by 'CHANEL'.
  • the representative data may include both a mark and a name in the form of the illustrated image.
  • the representative data may include only a core part of the data (hereinafter, referred to as 'subject').
  • the processor 110 may recognize only the 'CHANEL' portion excluding the appended figures as the representative data 400 rather than recognizing the entire image of the corresponding mark as the representative data 400 .
  • the processor 110 may generate a representative data group and increase correlation between items corresponding to the representative data group.
  • the representative data group may mean a set of similar representative data. Referring to FIG. 3 , the illustrated 'CHANEL' and similar marks may be grouped into one representative data group.
  • the processor 110 may determine the similarity between the representative data to generate the representative data group. For example, the processor 110 may determine the similarity between the representative data based on a neural network model that detects the similarity of images or words. When the determined similarity satisfies a preset criterion, the processor 110 may include representative data in one representative data group.
  • the processor 110 may recognize items designated in each of the trademark applications included in the representative data group, and increase a correlation index between the items. In this case, the processor 110 may recognize the trademark applications included in the representative data group as one trademark application, and increase the relevance index by 1 for all combinations of each of the items.
  • the processor 110 may overlappingly increase the relevance index by reflecting the degree of similarity between the calculated representative data for each trademark application, regardless of whether the representative data group is generated. For example, when the similarity between the representative data of the first application and the representative data of the second application is 0.6, the processor 110 may increase the relevance index by 0.6 for combinations between one or more items included in the first application and one or more items included in the second application.
  • the reciprocal of the degree of similarity between the representative data may be reflected in the relevance index. That is, when the similarity between the representative data is 0.2, the base relevance index increment of 1 is multiplied by 5, the reciprocal of the similarity, so that the relevance index increases by 5. In this case, the amount of increase in the correlation between trademark applications whose representative data have low similarity becomes larger.
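  • The reciprocal weighting above, written out numerically (values taken from the example):

```python
similarity = 0.2                  # similarity between the two sets of representative data
increment = 1 * (1 / similarity)  # base increment of 1 times the reciprocal of the similarity
print(increment)                  # 5.0: low-similarity applications increase the index more
```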
  • the correlation data 200 that meets the needs of the applicant can be generated according to which metadata is used to increase the correlation in the item recommendation method according to the present disclosure.
  • correlation data 200 suitable for overlapping protection of trademarks may be generated by increasing the correlation for each application.
  • the processor 110 may determine to reflect the similarity in increasing the relevance index. Conversely, when the applicant desires broad protection for trademark rights, the processor 110 may determine to consider all applications included in the representative data group as a single application.
  • FIG. 4 illustrates an item, an item class, and a similar item group in accordance with some embodiments of the present disclosure.
  • an item may have intrinsic properties and non-essential properties, and information about a class and a group of similar items is related to the non-essential properties of the items.
  • information about a class and a group of similar items is not necessarily determined based on essential properties of a plurality of items. Accordingly, it should be understood that a class and a group of similar items may be determined based on arbitrary properties of the items regardless of the description below.
  • the items (designated products) listed on the right side of the table are all related to the distribution business. Therefore, the common essential property of items can be said to be "distribution business".
  • the processor 110 may determine the "35 class" corresponding to "distribution business" as a class.
  • the processor 110 may recognize the class of each of the items designated together and, when items belonging to different classes are designated together, may increase the correlation value by applying a weight.
  • the processor 110 may determine the weight index by using whether the non-essential properties are the same, the similarity between the non-essential properties, and the like. Specifically, the processor 110 may recognize a measure of identity and similarity (i.e., a degree of similarity) between non-essential attributes. The similarity between these non-essential attributes may be pre-stored in the memory 120 and read by the processor 110. Alternatively, when the class or similar item group information is text information, the processor 110 may calculate the similarity between non-essential attributes using a conventional string-similarity method such as Levenshtein Distance, Hamming Distance, Smith-Waterman, or Sørensen-Dice Coefficient, or a neural-network-based word similarity technique for text processing.
  • the processor 110 may calculate the similarity between information of a class or similar item group in the form of text information based on a neural network that calculates a similarity between meanings implied by non-essential properties of items. Since the above description is merely exemplary, a method of calculating the similarity between non-essential information, particularly a class and a group of similar items, should not be limited to the above-described example.
  • the processor 110 may calculate a weight index by using the similarity between the calculated non-essential attributes. For example, the processor 110 may assign a higher weight as the non-essential properties are similar, and conversely assign a lower weight as the non-essential properties are different. Conversely, in order to recommend several items with different non-essential properties, the processor 110 may give a lower weight as the non-essential properties are similar and, conversely, give a higher weight as the non-essential properties are different. As described above, the processor 110 may perform such a series of selections without direct instruction from the user. For example, when the user gives “multi-class selection”, the processor 110 may give a higher weight index based on this input as the non-essential properties are different.
  • alternatively, even without a direct user input, if the classes of the plurality of first items selected by the user to receive the item recommendation differ from one another, the processor 110 may automatically recognize this as a "multi-class selection" and assign a higher weight index as the non-essential properties are different. Conversely, if the classes of the plurality of first items selected by the user to receive the item recommendation are all the same, the processor 110 may automatically recognize this as a "single class selection" and assign a higher weight index as the non-essential properties are similar.
  • the processor 110 may calculate correlation and correlation data so as to generate an item recommendation list suitable for a user's needs.
  • FIG. 5 is a schematic diagram illustrating a network function according to an embodiment of the present disclosure.
  • a neural network may be composed of a set of interconnected computational units, which may generally be referred to as nodes. These nodes may also be referred to as neurons.
  • a neural network is configured by including at least one or more nodes. Nodes (or neurons) constituting the neural networks may be interconnected by one or more links.
  • one or more nodes connected through a link may relatively form a relationship between an input node and an output node.
  • the concepts of an input node and an output node are relative, and any node in an output node relationship with respect to one node may be in an input node relationship in a relationship with another node, and vice versa.
  • an input node-to-output node relationship may be created around a link.
  • One or more output nodes may be connected to one input node through a link, and vice versa.
  • the value of the data of the output node may be determined based on data input to the input node.
  • a link interconnecting the input node and the output node may have a weight.
  • the weight may be variable, and may be changed by a user or an algorithm in order for the neural network to perform a desired function. For example, when one or more input nodes are connected to one output node by respective links, the output node may determine its value based on the values input to the input nodes connected to it and the weights set on the links corresponding to the respective input nodes.
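  • A minimal sketch of how an output node value can be computed from the input node values and link weights described above; the numbers and the sigmoid activation are assumptions for illustration.

```python
import math

# Values of the input nodes connected to one output node, and the link weights.
inputs = [0.5, -1.0, 2.0]
weights = [0.8, 0.2, -0.4]

# Output node value: weighted sum of the inputs, passed through a sigmoid here.
z = sum(x * w for x, w in zip(inputs, weights))
output = 1 / (1 + math.exp(-z))
print(round(output, 3))  # approximately 0.354
```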
  • one or more nodes are interconnected through one or more links to form an input node and an output node relationship in the neural network.
  • the characteristics of the neural network may be determined according to the number of nodes and links in the neural network, the correlation between the nodes and the links, and the value of a weight assigned to each of the links. For example, when the same number of nodes and links exist and there are two neural networks having different weight values of the links, the two neural networks may be recognized as different from each other.
  • a neural network may consist of a set of one or more nodes.
  • a subset of nodes constituting the neural network may constitute a layer.
  • Some of the nodes constituting the neural network may configure one layer based on distances from the initial input node.
  • a set of nodes at a distance of n from the initial input node may constitute the n-th layer.
  • the distance from the initial input node may be defined by the minimum number of links that must be traversed to reach the corresponding node from the initial input node.
  • the definition of such a layer is arbitrary for description, and the order of the layer in the neural network may be defined in a different way from the above.
  • a layer of nodes may be defined by a distance from the final output node.
  • the initial input node may mean one or more nodes to which data is directly input without going through a link in a relationship with other nodes among nodes in the neural network.
  • it may mean nodes that do not have other input nodes connected by a link.
  • the final output node may refer to one or more nodes that do not have an output node in relation to other nodes among nodes in the neural network.
  • the hidden node may mean nodes constituting the neural network other than the first input node and the last output node.
  • the neural network according to an embodiment of the present disclosure may be a neural network in which the number of nodes in the input layer may be the same as the number of nodes in the output layer, and the number of nodes decreases and then increases again as the layers progress from the input layer to the hidden layer.
  • the neural network according to another embodiment of the present disclosure may be a neural network in which the number of nodes in the input layer may be less than the number of nodes in the output layer, and the number of nodes decreases as the layers progress from the input layer to the hidden layer.
  • the neural network according to another embodiment of the present disclosure may be a neural network in which the number of nodes in the input layer may be greater than the number of nodes in the output layer, and the number of nodes increases as the layers progress from the input layer to the hidden layer.
  • the neural network according to another embodiment of the present disclosure may be a neural network in a combined form of the aforementioned neural networks.
  • a deep neural network may refer to a neural network including a plurality of hidden layers in addition to an input layer and an output layer.
  • Deep neural networks can be used to identify the latent structures of data. In other words, they can identify the latent structures of photos, texts, videos, voices, and music (e.g., what objects are in a photo, what the content and emotion of a text are, what the content and emotion of a voice are, etc.).
  • Deep neural networks include convolutional neural networks (CNNs), recurrent neural networks (RNNs), auto encoders, generative adversarial networks (GANs), restricted Boltzmann machines (RBMs), deep belief networks (DBNs), Q networks, U networks, Siamese networks, and the like.
  • the network function may include an autoencoder.
  • the auto-encoder may be a kind of artificial neural network for outputting output data similar to input data.
  • the auto encoder may include at least one hidden layer, and an odd number of hidden layers may be disposed between the input/output layers.
  • the number of nodes in each layer may be reduced from the number of nodes in the input layer to an intermediate layer called the bottleneck layer (encoding), and then expanded again from the bottleneck layer to the output layer (which is symmetrical to the input layer) in a manner symmetrical to the reduction.
  • the auto-encoder can perform non-linear dimensionality reduction.
  • the number of nodes in the input layer and the output layer may correspond to the dimension of the input data after preprocessing.
  • the number of nodes of the hidden layers included in the encoder may have a structure that decreases as the distance from the input layer increases. If the number of nodes in the bottleneck layer (the layer with the fewest nodes, located between the encoder and the decoder) is too small, a sufficient amount of information may not be conveyed, so a certain number of nodes or more (e.g., more than half of the number of nodes in the input layer) may be maintained.
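  • A toy numpy sketch of the symmetric encoder/bottleneck/decoder shape described above; the layer sizes, the random weights, and the tanh activation are assumptions, and no training is shown.

```python
import numpy as np

# Assumed symmetric layer sizes: nodes shrink toward the bottleneck, then expand again.
layer_sizes = [32, 24, 18, 24, 32]   # bottleneck of 18 nodes (more than half of the input layer)

rng = np.random.default_rng(0)
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Encode to the bottleneck and decode back; after training, the output
    should approximate the input."""
    for w in weights:
        x = np.tanh(x @ w)
    return x

x = rng.normal(size=(1, 32))
print(forward(x).shape)  # (1, 32): same dimensionality as the input
```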
  • the neural network may be trained using at least one of supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning. Learning of the neural network may be a process of applying knowledge for the neural network to perform a specific operation to the neural network.
  • a neural network can be trained in a way that minimizes output errors.
  • training is a process of iteratively inputting the training data into the neural network, calculating the error between the output of the neural network and the target for the training data, and back-propagating the error of the neural network from the output layer of the neural network toward the input layer in the direction of reducing the error, thereby updating the weight of each node of the neural network.
  • in the case of supervised learning, training data in which the correct answer is labeled in each piece of training data is used (i.e., labeled training data), and in the case of unsupervised learning, the correct answer may not be labeled in each piece of training data.
  • for example, the training data in the case of supervised learning for data classification may be data in which a category is labeled in each piece of training data.
  • Labeled training data is input to the neural network, and an error can be calculated by comparing the output (category) of the neural network with the label of the training data.
  • an error may be calculated by comparing the input training data with the neural network output. The calculated error is back propagated in the reverse direction (ie, from the output layer to the input layer) in the neural network, and the connection weight of each node of each layer of the neural network may be updated according to the back propagation.
  • a change amount of the connection weight of each node to be updated may be determined according to a learning rate.
  • the computation of the neural network on the input data and the backpropagation of errors can constitute a learning cycle (epoch).
  • the learning rate may be applied differently depending on the number of repetitions of the learning cycle of the neural network. For example, in the early stage of training of a neural network, a high learning rate can be used to enable the neural network to quickly acquire a certain level of performance, thereby increasing efficiency, and using a low learning rate at the end of learning can increase accuracy.
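  • The following sketch illustrates the training loop described above (iterative input of training data, backpropagation of the output error, and a learning rate that decays over learning cycles); it uses PyTorch with made-up data and is not the disclosed implementation.

```python
# Illustrative training loop: backpropagate the output error and decay the learning rate.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.1)   # higher learning rate early on
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)

inputs = torch.randn(32, 10)    # dummy training data
targets = torch.randn(32, 1)    # dummy targets

for epoch in range(30):         # each iteration is one learning cycle (epoch)
    optimizer.zero_grad()
    loss = criterion(net(inputs), targets)  # error between output and target
    loss.backward()             # backpropagate the error from the output layer toward the input layer
    optimizer.step()            # update weights in the direction that reduces the error
    scheduler.step()            # lower the learning rate in later epochs for accuracy
```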
  • the training data may be a subset of real data (that is, the data to be processed using the trained neural network), and thus there may be a learning cycle in which the error on the training data decreases while the error on the real data increases.
  • Overfitting is a phenomenon in which the error on real data increases because of over-learning on the training data as described above. For example, a neural network that has learned what a cat is by seeing only yellow cats and therefore fails to recognize a cat of another color as a cat is a type of overfitting. Overfitting can be a cause of increased error in machine learning algorithms. To prevent such overfitting, various optimization methods can be used, such as increasing the training data, regularization, dropout (which deactivates some of the nodes in the network during the learning process), and the use of a batch normalization layer.
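  • As a sketch of the overfitting countermeasures named above (dropout, a batch normalization layer, and regularization), the snippet below shows how they might appear in a model definition; the sizes and rates are assumptions.

```python
# Illustrative overfitting countermeasures: dropout, batch normalization, weight decay.
import torch
import torch.nn as nn

regularized_net = nn.Sequential(
    nn.Linear(10, 32),
    nn.BatchNorm1d(32),   # batch normalization layer
    nn.ReLU(),
    nn.Dropout(p=0.5),    # randomly deactivates some nodes during the learning process
    nn.Linear(32, 1),
)
# Regularization via L2 weight decay in the optimizer.
optimizer = torch.optim.Adam(regularized_net.parameters(), lr=1e-3, weight_decay=1e-4)
```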
  • a computer-readable medium storing a data structure is disclosed according to an embodiment of the present disclosure.
  • the data structure may refer to the organization, management, and storage of data that enables efficient access and modification of data.
  • a data structure may refer to an organization of data to solve a specific problem (eg, data retrieval, data storage, and data modification in the shortest time).
  • a data structure may be defined as a physical or logical relationship between data elements designed to support a particular data processing function.
  • the logical relationship between data elements may include a connection relationship between user-defined data elements.
  • Physical relationships between data elements may include actual relationships between data elements physically stored on a computer-readable storage medium (eg, persistent storage).
  • a data structure may specifically include a set of data, relationships between data, and functions or instructions applicable to data.
  • a computing device can perform an operation while using the resource of the computing device to a minimum. Specifically, the computing device may increase the efficiency of operations, reads, insertions, deletions, comparisons, exchanges, and retrievals through effectively designed data structures.
  • a data structure may be classified into a linear data structure and a non-linear data structure according to the type of the data structure.
  • the linear data structure may be a structure in which only one piece of data is connected after one piece of data.
  • the linear data structure may include a list, a stack, a queue, and a deque.
  • a list may mean a set of data in which an order exists internally.
  • the list may include a linked list.
  • the linked list may be a data structure in which data is linked in such a way that each data is linked in a line with a pointer. In a linked list, a pointer may contain information about a link with the next or previous data.
  • a linked list may be expressed as a single linked list, a doubly linked list, or a circularly linked list according to a shape.
  • a stack can be a data enumeration structure with limited access to data.
  • a stack can be a linear data structure in which data can be processed (eg, inserted or deleted) at only one end of the data structure.
  • the data stored in the stack may follow a last-in, first-out (LIFO) order.
  • a queue is a data listing structure that allows limited access to data. Unlike a stack, the queue may be a first-in, first-out (FIFO) data structure in which data stored later comes out later.
  • a deque can be a data structure that can process data at either end of the data structure.
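  • The linear structures listed above can be illustrated with plain Python as follows; this is only an illustration of the terms, not part of the claimed method.

```python
# List, stack (LIFO), queue (FIFO), and deque, illustrated with plain Python.
from collections import deque

items = ["A", "B", "C"]         # list: an ordered collection of data

stack = []                      # stack: last-in, first-out (LIFO)
stack.append("A")
stack.append("B")
assert stack.pop() == "B"       # the most recently stored data comes out first

queue = deque()                 # queue: first-in, first-out (FIFO)
queue.append("A")
queue.append("B")
assert queue.popleft() == "A"   # the earliest stored data comes out first

dq = deque(["A", "B"])          # deque: data can be processed at either end
dq.appendleft("front")
dq.append("back")
```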
  • the nonlinear data structure may be a structure in which a plurality of data is connected after one data.
  • the nonlinear data structure may include a graph data structure.
  • a graph data structure may be defined as a vertex and an edge, and the edge may include a line connecting two different vertices.
  • a graph data structure may include a tree data structure.
  • the tree data structure may be a data structure in which one path connects two different vertices among a plurality of vertices included in the tree. That is, it may be a data structure that does not form a loop in the graph data structure.
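  • A minimal illustration of the graph and tree structures described above, using adjacency lists; the vertices and edges are made-up examples.

```python
# Graph: vertices connected by edges; this example contains a loop (cycle).
graph = {
    "A": ["B", "C"],   # edges A-B and A-C
    "B": ["A", "C"],
    "C": ["A", "B"],
}

# Tree: a graph in which exactly one path connects any two vertices (no loops).
tree = {
    "root": ["left", "right"],
    "left": [],
    "right": [],
}
```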
  • the data structure may include a neural network.
  • the data structure including the neural network may be stored in a computer-readable medium.
  • the data structure including the neural network may include preprocessed data for processing by the neural network, data input to the neural network, weights of the neural network, hyperparameters of the neural network, data obtained from the neural network, an activation function associated with each node or layer of the neural network, and a loss function for training the neural network.
  • a data structure including a neural network may include any of the components disclosed above, or any combination thereof.
  • a data structure including a neural network may include any other information that determines a characteristic of the neural network.
  • the data structure may include all types of data used or generated in the operation process of the neural network, and is not limited to the above.
  • Computer-readable media may include computer-readable recording media and/or computer-readable transmission media.
  • a neural network may be composed of a set of interconnected computational units, which may generally be referred to as nodes. These nodes may also be referred to as neurons.
  • a neural network is configured by including at least one or more nodes.
  • the data structure may include data input to the neural network.
  • a data structure including data input to the neural network may be stored in a computer-readable medium.
  • the data input to the neural network may include learning data input in a neural network learning process and/or input data input to the neural network in which learning is completed.
  • Data input to the neural network may include pre-processing data and/or pre-processing target data.
  • the preprocessing may include a data processing process for inputting data into the neural network.
  • the data structure may include data to be pre-processed and data generated by pre-processing.
  • the above-described data structure is merely an example, and the present disclosure is not limited thereto.
  • the data structure may include the weights of the neural network.
  • a weight and a parameter may be used interchangeably.
  • a data structure including a weight of a neural network may be stored in a computer-readable medium.
  • the neural network may include a plurality of weights.
  • the weight may be variable, and may be changed by the user or an algorithm in order for the neural network to perform a desired function. For example, when one or more input nodes are connected to an output node by respective links, the output node may determine the data value it outputs based on the values input to the input nodes connected to it and the weights set on the links corresponding to the respective input nodes.
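  • The role of link weights described above can be illustrated as follows; the values are made up, and the calculation shown (a weighted sum) is only one common way an output node may combine its inputs.

```python
# Output node value as a weighted sum of the values of its input nodes.
input_values = [0.5, -1.0, 2.0]   # values arriving from three input nodes
link_weights = [0.8, 0.1, -0.4]   # weight set on each connecting link

output_value = sum(v * w for v, w in zip(input_values, link_weights))
print(output_value)               # 0.5*0.8 + (-1.0)*0.1 + 2.0*(-0.4) = -0.5
```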
  • the above-described data structure is merely an example, and the present disclosure is not limited thereto.
  • the weight may include a weight variable in a neural network learning process and/or a weight in which neural network learning is completed.
  • the variable weight in the neural network learning process may include a weight at the start of the learning cycle and/or a variable weight during the learning cycle.
  • the weight for which neural network learning is completed may include a weight for which a learning cycle is completed.
  • the data structure including the weight of the neural network may include a data structure including the weight variable in the neural network learning process and/or the weight in which the neural network learning is completed. Therefore, it is assumed that the above-described weights and/or combinations of weights are included in the data structure including the weights of the neural network.
  • the above-described data structure is merely an example, and the present disclosure is not limited thereto.
  • the data structure including the weights of the neural network may be stored in a computer-readable storage medium (eg, memory, hard disk) after being serialized.
  • Serialization can be the process of converting a data structure into a form that can be reconstructed and used later by storing it on the same or a different computing device.
  • the computing device may serialize the data structure to send and receive data over the network.
  • a data structure including weights of the serialized neural network may be reconstructed in the same computing device or in another computing device through deserialization.
  • the data structure including the weights of the neural network is not limited to serialization.
  • the data structure including the weights of the neural network may be a data structure that increases the efficiency of computation while using the resources of the computing device to a minimum (e.g., a B-Tree, a Trie, an m-way search tree, an AVL tree, or a Red-Black tree).
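  • A sketch of serializing and deserializing neural-network weights as described above, assuming PyTorch; this is an illustration of the general idea, not the disclosed implementation.

```python
# Serialize the weight data structure so it can be reconstructed later,
# on the same or another computing device.
import io
import torch
import torch.nn as nn

net = nn.Linear(4, 2)

buffer = io.BytesIO()                 # stands in for a storage medium or network stream
torch.save(net.state_dict(), buffer)  # serialization of the weights
buffer.seek(0)

restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load(buffer))  # deserialization reconstructs the weights
```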
  • the data structure may include hyper-parameters of the neural network.
  • the data structure including the hyperparameters of the neural network may be stored in a computer-readable medium.
  • the hyperparameter may be a variable that can be changed by a user. Hyperparameters may include, for example, the learning rate, the cost function, the number of iterations of the learning cycle, weight initialization (e.g., setting the range of weight values subject to weight initialization), and the number of hidden units (e.g., the number of hidden layers and the number of nodes in each hidden layer).
  • the above-described data structure is merely an example, and the present disclosure is not limited thereto.
  • the neural network as described above may be used to derive a similarity between representative data related to an event, including event-related metadata, event names, event images, event thumbnails, event voices, and the like, between essential properties of items, and between non-essential properties of items, such as a class or a similar representative data group.
  • the similarity between representative data may be based on a similarity measurement method between images using a convolutional neural network (CNN) or a similarity measurement method between words using a natural language processing technique.
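  • One way to realize the similarity measurement mentioned above is to compare embedding vectors of the representative data; the sketch below assumes such embeddings (e.g., CNN image features or word embeddings) are already available and uses made-up values.

```python
# Cosine similarity between two embedding vectors of representative data (illustrative).
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

mark_embedding_1 = [0.2, 0.7, 0.1]   # e.g., features of one mark image
mark_embedding_2 = [0.3, 0.6, 0.2]   # e.g., features of a similar mark image
print(cosine_similarity(mark_embedding_1, mark_embedding_2))
```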
  • FIG. 6 illustrates a list of recommended items according to some embodiments of the present disclosure.
  • the processor 110 may generate a list of recommended items based on the degree of correlation.
  • the recommended item is an item that is recommended to be designated together with the first item, and may be determined based on the correlation 220 included in the correlation data 200 .
  • items E, F, and D were determined as recommended items for item A.
  • the processor 110 may determine the recommended items in the order in which the correlation with the item A is high.
  • the list of recommended items for item A illustrated in FIG. 6 is exemplary, and the number of recommended items and criteria for determining the recommended items are not limited thereto.
  • FIG. 7 is a flowchart illustrating a process in which a processor performs an item recommendation method according to some embodiments of the present disclosure.
  • the processor 110 may generate correlation data between items ( S100 ).
  • the correlation data 200 includes information on the correlation 220 between one or more items.
  • the correlation data 200 may be expressed as two-dimensional or more matrix information.
  • the processor 110 may express the correlation data 200 as a compressed sparse matrix.
  • the processor 110 may store the correlation data 200 using a dictionary of keys (DOK), a list of lists (LIL), a coordinate list (COO), a compressed sparse row (CSR) format, or the like.
  • the processor 110 may save memory space by compressing the representation of the correlation data 200 . Through this, the memory 120 can be used space-efficiently, and the input/output speed for the correlation data 200 can be very fast.
  • the item 210 may correspond to a designated product.
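  • As an illustration of storing the correlation data 200 in a compressed sparse format, the sketch below uses SciPy's sparse-matrix types, which correspond to the DOK/LIL/COO/CSR formats listed above; the indices and values are made up.

```python
# Store item-to-item correlations as a sparse matrix and convert to CSR.
import numpy as np
from scipy.sparse import coo_matrix

rows = np.array([0, 0, 1, 2])     # index of one item in each correlated pair
cols = np.array([2, 4, 3, 4])     # index of the other item in the pair
vals = np.array([4, 14, 5, 22])   # correlation (e.g., co-designation frequency)

correlation = coo_matrix((vals, (rows, cols)), shape=(6, 6))   # COO format
correlation_csr = correlation.tocsr()       # CSR: compact storage, fast row access
print(correlation_csr[0].toarray())         # correlation row for item 0
```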
  • the processor 110 may extract one or more items corresponding to the first item (S200).
  • the first item may be a target on which an event is performed.
  • the event may be, for example, a trademark application.
  • the first item may be any designated product that the applicant intends to apply for.
  • the first item may be directly input by the user through the input device.
  • One or more items corresponding to the first item extracted by the processor 110 may be items determined to be similar to the first item. As described above in FIG. 2 , for example, the degree of similarity of any two items may be determined based on the similarity of their intrinsic/non-essential properties. The processor 110 may determine that the two items are similar when the degree of similarity between any two items satisfies a preset condition.
  • the processor 110 may determine that items belonging to the same class or belonging to the same similar item group are similar. Alternatively, the processor 110 may determine the similar item by using the essential attribute of the first item. For example, with respect to the intrinsic attribute of the first item, it may be determined that the second item having the intrinsic attribute matching a predetermined ratio or more is an item similar to the first item. The processor 110 may determine an item similar to the first item based on a model using a neural network as shown in FIG. 5 or a conventional text similarity analysis algorithm such as Levenshtein Distance. The processor 110 may extract the determined similar items as one or more items corresponding to the first item.
  • the one or more items extracted corresponding to the first item may be a set of items in which a distance between correlation vectors satisfies a preset condition.
  • the correlation vector may be a vector expressing correlation values of an arbitrary item with one or more other items. Referring back to FIG. 2 , the correlation vector for item A may be (0, 1, 4, 5, 14, 7). Also, the correlation vector for the item E may be (14, 6, 22, 1, 0, 4).
  • the processor 110 may determine items whose correlation vectors are at the shortest distance from the correlation vector of the first item as the items corresponding to the first item.
  • the correlation vector may express the frequency with which one item is related to other items in the same event. Accordingly, when the correlation vectors are similar (i.e., when the distance between the correlation vectors is short), the two items can be regarded as similar items. Since the correlation vectors of similar items can thus be reflected in the list of recommended items, trademark rights can be protected in a more multi-faceted manner.
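  • Using the example correlation vectors quoted above for items A and E, the distance between correlation vectors can be computed as follows; Euclidean distance is used here only as one possible distance measure.

```python
# Distance between the correlation vectors of items A and E (shorter = more similar).
import numpy as np

corr_A = np.array([0, 1, 4, 5, 14, 7])
corr_E = np.array([14, 6, 22, 1, 0, 4])

distance = np.linalg.norm(corr_A - corr_E)   # Euclidean distance
print(round(float(distance), 2))             # approximately 27.68
```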
  • the processor 110 may extract one or more items corresponding to the first item based on the distance between the correlation vectors.
  • the correlation vector may include correlation values between an arbitrary item and other items.
  • the correlation vector for item A may be (0, 1, 4, 5, 14, 7). Also, the correlation vector for the item E may be (14, 6, 22, 1, 0, 4).
  • the processor 110 may determine, as the items corresponding to the first item, the items whose correlation vectors have the shortest distance.
  • the correlation vector may express the frequency with which one item is designated together with another item. Accordingly, when the correlation vectors are similar (i.e., when the distance between the correlation vectors is short), the two items can be regarded as similar items. Since the correlation vectors of similar items can thus be reflected in the list of recommended items, trademark rights can be protected in a more multi-faceted manner.
  • the processor 110 may generate a list of recommended items for the first item based on the correlation data between the items and the extracted one or more items ( S300 ).
  • the recommended item is an item recommended to appear together with the first item in the event, and may be determined based on the correlation 220 included in the correlation data 200 . As an example, it is assumed that the second item and the third item are extracted as items corresponding to the first item.
  • the processor 110 may recognize correlation information of the first item, the second item, and the third item from the correlation data 200 .
  • the processor 110 may aggregate (eg, add correlation values for the same item) of relevance information of the recognized first to third items.
  • the processor 110 may generate the list of recommended items from the aggregated relevance information, in descending order of relevance (items with higher aggregated relevance first).
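  • A sketch of this aggregation step follows; the correlation rows for items A and E match the example values quoted earlier, while the remaining rows and the choice of items B and C as the extracted similar items are made up for illustration and are not the claimed method.

```python
# Aggregate the correlation rows of the first item and its similar items,
# then rank the other items by the summed correlation.
import numpy as np

item_names = ["A", "B", "C", "D", "E", "F"]
correlation = np.array([
    [0,  1,  4, 5, 14, 7],   # row for item A (the first item, from the earlier example)
    [1,  0,  2, 3,  6, 2],   # row for item B (assumed extracted as similar to A)
    [4,  2,  0, 9, 22, 1],   # row for item C (assumed extracted as similar to A)
    [5,  3,  9, 0,  1, 4],
    [14, 6, 22, 1,  0, 4],   # row for item E (from the earlier example)
    [7,  2,  1, 4,  4, 0],
])

aggregated = correlation[[0, 1, 2]].sum(axis=0)   # add correlation values per item
aggregated[[0, 1, 2]] = -1                        # exclude the query items themselves
recommended = [item_names[i] for i in np.argsort(-aggregated)[:3]]
print(recommended)                                # ['E', 'D', 'F']
```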
  • the processor 110 may calculate the weight index by using the similarity between the calculated non-essential attributes. For example, the processor 110 may assign a higher weight as the non-essential properties are similar, and conversely, give a lower weight as the non-essential properties are different. Conversely, in order to recommend several items with different non-essential properties, the processor 110 may give a lower weight as the non-essential properties are similar and, conversely, give a higher weight as the non-essential properties are different. As described above, the processor 110 may perform such a series of selections without direct instruction from the user. For example, when the user gives “multi-class selection”, the processor 110 may give a higher weight index based on this input as the non-essential properties are different.
  • if the classes of the plurality of first items selected by the user to receive the item recommendation are different from each other, the processor 110 may automatically recognize this as “multi-class selection” and assign a higher weight index as the non-essential properties differ. Conversely, if the classes of the plurality of first items selected by the user to receive the item recommendation are all the same, the processor 110 may automatically recognize this as “single-class selection” and assign a higher weight index as the non-essential properties are similar.
  • When generating the recommended item list, the processor 110 may include in the recommended item list only items corresponding to the same class or similar item group as the first item, or only items corresponding to a different class or a different similar item group from the first item.
  • FIG. 8 is a flowchart illustrating a process in which a processor performs a method of generating correlation data between items according to some embodiments of the present disclosure.
  • the processor 110 may recognize the class of each of the plurality of items ( S110 ).
  • a class or similar item group may be defined as a type of identification data assigned to items having similar properties.
  • Although the attributes of an item are used to provide class and similar item group information, the attributes used are not necessarily dependent on the essential attributes of the item described above. Accordingly, in the present disclosure, non-essential properties of items, such as class and similar item group information, may be arbitrarily defined.
  • a class and a similar item group may be identification information for a set of one or more items having the same and similar essential properties.
  • a class and a similar item group may be identification information for a set of one or more items having the same and similar arbitrary properties.
  • the class or similar item group may be a set of items in which a similarity between items calculated based on a distance between correlation vectors satisfies a preset condition.
  • the correlation vector may include correlation values between an arbitrary item and other items. Referring back to FIG. 2 , the correlation vector for item A may be (0, 1, 4, 5, 14, 7). Also, the correlation vector for the item E may be (14, 6, 22, 1, 0, 4).
  • the processor 110 may determine, as the items corresponding to the first item, the items whose correlation vectors have the shortest distance.
  • the correlation vector may express the frequency with which one item is designated together with another item. Accordingly, when the correlation vectors are similar (i.e., when the distance between the correlation vectors is short), the two items can be regarded as similar items. Since the correlation vectors of similar items can thus be reflected in the list of recommended items, trademark rights can be protected in a more multi-faceted manner.
  • the processor 110 may give weights when increasing the correlation between items having different classes ( S120 ).
  • Non-essential attributes such as class and similar item group information for an item may be stored in the memory 120 and read by the processor 110 . Accordingly, when calculating the degree of correlation between the plurality of items, the processor 110 recognizes each of the plurality of items, and reads, if necessary, at least one or more class and similar item group information for each of the items from the memory 120 . can be recognised.
  • non-essential attributes such as class and similar item group information for an item may be stored in the memory 120 and read by the processor 110 . Accordingly, when calculating the degree of correlation between the plurality of items, the processor 110 recognizes each of the plurality of items, and reads, if necessary, at least one or more class and similar item group information for each of the items from the memory 120 . can be recognised.
  • the processor 110 may determine whether the classes of the first item and the second item are the same, and when the classes are different, the processor 110 may assign a weight index to increase the correlation between the first item and the second item.
  • the processor 110 may determine the weight index by using whether the non-essential properties are identical, the similarity between the non-essential properties, and the like. Specifically, the processor 110 may recognize a measure of identity and similarity between non-essential attributes (i.e., a degree of similarity). The similarity between these non-essential attributes may be pre-stored in the memory 120 and read by the processor 110 . Alternatively, when the class or similar item group information is text information, the processor 110 may calculate the similarity between non-essential attributes using a conventional string-similarity measure such as Levenshtein distance, Hamming distance, Smith-Waterman, or the Sørensen-Dice coefficient, or using a neural-network-based word similarity technique for text processing. Alternatively, the processor 110 may calculate the similarity between class or similar item group information in the form of text based on a neural network that calculates the similarity between the meanings implied by the non-essential properties of items.
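  • As an illustration of one of the string-similarity measures named above, a plain-Python Levenshtein distance is sketched below; the disclosure does not fix a particular implementation, and the example strings are made up.

```python
# Levenshtein (edit) distance between two item names.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

print(levenshtein("smartphone", "smart phone"))   # 1 (one inserted space)
```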
  • the processor 110 may calculate the weight index by using the similarity between the calculated non-essential attributes. For example, the processor 110 may assign a higher weight as the non-essential properties are similar, and conversely, give a lower weight as the non-essential properties are different. Conversely, in order to recommend several items with different non-essential properties, the processor 110 may give a lower weight as the non-essential properties are similar and, conversely, give a higher weight as the non-essential properties are different. As described above, the processor 110 may perform such a series of selections without direct instruction from the user. For example, when the user gives “multi-class selection”, the processor 110 may give a higher weight index based on this input as the non-essential properties are different.
  • if the classes of the plurality of first items selected by the user to receive the item recommendation are different from each other, the processor 110 may automatically recognize this as “multi-class selection” and assign a higher weight index as the non-essential properties differ. Conversely, if the classes of the plurality of first items selected by the user to receive the item recommendation are all the same, the processor 110 may automatically recognize this as “single-class selection” and assign a higher weight index as the non-essential properties are similar.
  • the processor 110 may calculate correlation and correlation data so as to generate an item recommendation list suitable for a user's needs.
  • FIG. 9 is a flowchart illustrating a process in which a processor generates correlation data between items according to some embodiments of the present disclosure.
  • the processor 110 may generate a representative data group including one or more similar representative data ( S130 ).
  • representative data may include an event name, image, video, thumbnail, voice, etc. representing the event.
  • the representative data may be multimedia data in which the event name, image, video, thumbnail, voice, and the like are combined. Since the above description is merely an example of the type of representative data, the type of representative data is not limited thereto.
  • the processor 110 may generate a representative data group.
  • the processor 110 may determine the degree of similarity between the representative data to generate the representative data group.
  • the processor 110 may determine the similarity between the representative data based on a neural network model that detects the similarity of images or words.
  • the processor 110 may include representative data in one representative data group. Since the above description is only an example of a method for forming the representative data group, the method for generating the representative data group is not limited thereto.
  • the representative data 400 may be a mark in the form of an image.
  • the representative data 400 may be a brand name indicated by 'CHANEL'.
  • the representative data may include both a mark and a name in the form of the illustrated image.
  • the representative data may include only a core part of the data (hereinafter, referred to as 'subject').
  • the processor 110 may recognize only the 'CHANEL' portion excluding the appended figures as the representative data 400 rather than recognizing the entire image of the corresponding mark as the representative data 400 .
  • the processor 110 may generate a representative data group and increase correlation between items corresponding to the representative data group.
  • the representative data group may mean a set of similar representative data. Referring to FIG. 3 , the illustrated 'CHANEL' and similar marks may be grouped into one representative data group.
  • the processor 110 may determine the similarity between the representative data to generate the representative data group. For example, the processor 110 may determine the similarity between the representative data based on a neural network model that detects the similarity of images or words. When the determined similarity satisfies a preset criterion, the processor 110 may include representative data in one representative data group.
  • the processor 110 may increase the correlation between the plurality of items corresponding to the representative data group ( S140 ).
  • the processor 110 may recognize items designated in each of the trademark applications included in the representative data group, and increase a correlation index between the items.
  • the processor 110 may recognize the trademark applications included in the representative data group as one trademark application, and increase the relevance index by 1 for all combinations of each of the items.
  • the processor 110 may overlap the relevance index by reflecting the degree of similarity between the calculated representative data for each trademark application, regardless of whether the representative data group is generated. For example, when the similarity between the representative data of the first application and the representative data of the second application is 0.6, the processor 110 sets the relevance index by 0.6 in a combination between the items to which one or more items included in the second application are related. can increase
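  • A sketch of this counting scheme follows; the exact arithmetic (a unit increment per application, optionally weighted by mark similarity) is an assumption used for illustration, and the item names are made up.

```python
# Increase the relevance index for every pair of items appearing in one
# (grouped) trademark application, optionally weighted by representative-data similarity.
from itertools import combinations
from collections import defaultdict

relevance = defaultdict(float)

def add_application(items, weight=1.0):
    """Add `weight` to the relevance index of every pair of items in one application."""
    for x, y in combinations(sorted(set(items)), 2):
        relevance[(x, y)] += weight

add_application(["bag", "wallet", "perfume"])    # counted as a single application
add_application(["bag", "scarf"], weight=0.6)    # weighted by a similarity of 0.6
print(dict(relevance))
```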
  • the correlation data 200 that meets the needs of the applicant can be generated according to which metadata is used to increase the correlation in the item recommendation method according to the present disclosure.
  • correlation data 200 suitable for overlapping protection of trademarks may be generated by increasing the correlation for each application.
  • the processor 110 may determine to reflect the similarity in increasing the relevance index. Conversely, when the applicant desires broad protection for trademark rights, the processor 110 may determine to consider all applications included in the representative data group as a single application.
  • FIG. 10 is a flowchart illustrating a process in which a processor generates a representative data group according to some embodiments of the present disclosure.
  • the processor 110 may calculate a degree of similarity for each of one or more representative data included in the representative data group ( S131 ).
  • the processor 110 may generate a representative data group and increase correlation between items corresponding to the representative data group.
  • the representative data group may mean a set of similar representative data. Referring to FIG. 3 , the illustrated 'CHANEL' and similar marks may be grouped into one representative data group.
  • the processor 110 may determine the similarity between the representative data to generate the representative data group. For example, the processor 110 may determine the similarity between the representative data based on a neural network model that detects the similarity of images or words. When the determined similarity satisfies a preset criterion, the processor 110 may include representative data in one representative data group.
  • the processor 110 may recognize items designated in each of the trademark applications included in the representative data group, and increase a correlation index between the items. In this case, the processor 110 may recognize the trademark applications included in the representative data group as one trademark application, and increase the relevance index by 1 for all combinations of each of the items.
  • the processor 110 may assign a weight corresponding to each of the one or more representative data based on the degree of similarity ( S132 ).
  • the processor 110 may generate a representative data group and increase correlation between items corresponding to the representative data group.
  • the representative data group may mean a set of similar representative data. Referring to FIG. 3 , the illustrated 'CHANEL' and similar marks may be grouped into one representative data group.
  • the processor 110 may determine the similarity between the representative data to generate the representative data group. For example, the processor 110 may determine the similarity between the representative data based on a neural network model that detects the similarity of images or words. When the determined similarity satisfies a preset criterion, the processor 110 may include representative data in one representative data group.
  • the processor 110 may recognize items designated in each of the trademark applications included in the representative data group, and increase a correlation index between the items. In this case, the processor 110 may recognize the trademark applications included in the representative data group as one trademark application, and increase the relevance index by 1 for all combinations of each of the items.
  • the processor 110 may overlap the relevance index by reflecting the degree of similarity between the calculated representative data for each trademark application, regardless of whether the representative data group is generated. For example, when the similarity between the representative data of the first application and the representative data of the second application is 0.6, the processor 110 sets the relevance index by 0.6 in a combination between the items to which one or more items included in the second application are related. can increase
  • the reciprocal of the degree of similarity between the representative data may be reflected in the relevance index. That is, when the similarity between the representative data is 0.2, the relevance index is increased by 5, obtained by multiplying the base increment of 1 by 5, the reciprocal of the similarity. In this case, the increase in correlation between trademark applications whose representative data have low similarity is larger.
  • the correlation data 200 that meets the needs of the applicant can be generated according to which metadata is used to increase the correlation in the item recommendation method according to the present disclosure.
  • correlation data 200 suitable for overlapping protection of trademarks may be generated by increasing the correlation for each application.
  • the processor 110 may determine to reflect the similarity in increasing the relevance index. Conversely, when the applicant desires broad protection for trademark rights, the processor 110 may determine to consider all applications included in the representative data group as a single application.
  • FIG. 11 is a flowchart illustrating a process in which a processor performs a method of generating correlation data between items according to some embodiments of the present disclosure.
  • the processor 110 may determine one or more items having similar properties to an item to increase the correlation among a plurality of items included in the item database ( S150 ).
  • the processor 110 may increase the correlation with respect to one or more items having similar attributes ( S160 ).
  • the item database may be a database in which information about an item is stored.
  • the item database may include information about a name of each item, essential properties and non-essential properties of each item.
  • the processor 110 may process as follows.
  • the processor 110 may recognize items having similar properties to item B from the item database. Thereafter, the processor 110 may increase the correlation between the item A and the item B and items having similar properties to the recognized item B.
  • the degree of similarity of any two items may be determined based on the similarity of their essential/non-essential attributes.
  • the processor 110 may determine that the two items are similar when the degree of similarity between any two items satisfies a preset condition.
  • the processor 110 may determine that items belonging to the same class or belonging to the same similar item group are similar. Alternatively, the processor 110 may determine the similar item by using the essential attribute of the first item. For example, with respect to the intrinsic attribute of the first item, it may be determined that the second item having the intrinsic attribute matching a predetermined ratio or more is an item similar to the first item. The processor 110 may determine an item similar to the first item based on a model using a neural network as shown in FIG. 5 or a conventional text similarity analysis algorithm such as Levenshtein Distance. The processor 110 may extract the determined similar items as one or more items corresponding to the first item.
  • the one or more items extracted corresponding to the first item may be a set of items in which a distance between correlation vectors satisfies a preset condition.
  • the correlation vector may be a vector expressing correlation values of an arbitrary item with one or more other items. Referring back to FIG. 2 , the correlation vector for item A may be (0, 1, 4, 5, 14, 7). Also, the correlation vector for the item E may be (14, 6, 22, 1, 0, 4).
  • the processor 110 may determine items whose correlation vectors are at the shortest distance from the correlation vector of the first item as the items corresponding to the first item.
  • the correlation vector may express the frequency with which one item is related to other items in the same event. Accordingly, when the correlation vectors are similar (i.e., when the distance between the correlation vectors is short), the two items can be regarded as similar items. Since the correlation vectors of similar items can thus be reflected in the list of recommended items, trademark rights can be protected in a more multi-faceted manner.
  • the processor 110 may extract one or more items corresponding to the first item based on the distance between the correlation vectors.
  • the correlation vector may include correlation values between an arbitrary item and other items.
  • the correlation vector for item A may be (0, 1, 4, 5, 14, 7). Also, the correlation vector for the item E may be (14, 6, 22, 1, 0, 4).
  • the processor 110 may determine, as the items corresponding to the first item, the items whose correlation vectors have the shortest distance.
  • the correlation vector may express the frequency with which one item is designated together with another item. Accordingly, when the correlation vectors are similar (i.e., when the distance between the correlation vectors is short), the two items can be regarded as similar items. Since the correlation vectors of similar items can thus be reflected in the list of recommended items, trademark rights can be protected in a more multi-faceted manner.
  • the processor 110 may recognize one item most similar to item B.
  • FIG. 12 is a flowchart illustrating a process in which a processor determines one or more items having similar attributes to an item to increase a correlation, according to some embodiments of the present disclosure.
  • the processor 110 may calculate a similarity between the item to be increased in correlation and each of one or more items having similar properties ( S151 ).
  • the degree of similarity of any two items may be determined based on the similarity of their essential/non-essential attributes.
  • the processor 110 may determine that two items are similar when the degree of similarity between any two items satisfies a preset condition.
  • the processor 110 may determine that items belonging to the same class or belonging to the same similar item group are similar. Alternatively, the processor 110 may determine the similar item by using the essential attribute of the first item. For example, with respect to the intrinsic attribute of the first item, it may be determined that the second item having the intrinsic attribute matching a predetermined ratio or more is an item similar to the first item. The processor 110 may determine an item similar to the first item based on a model using a neural network as shown in FIG. 5 or a conventional text similarity analysis algorithm such as Levenshtein Distance. The processor 110 may extract the determined similar items as one or more items corresponding to the first item.
  • the one or more items extracted corresponding to the first item may be a set of items in which a distance between correlation vectors satisfies a preset condition.
  • the correlation vector may be a vector expressing correlation values of an arbitrary item with one or more other items. Referring back to FIG. 2 , the correlation vector for item A may be (0, 1, 4, 5, 14, 7). Also, the correlation vector for the item E may be (14, 6, 22, 1, 0, 4).
  • the processor 110 may determine items whose correlation vectors are at the shortest distance from the correlation vector of the first item as the items corresponding to the first item.
  • the correlation vector may express the frequency with which one item is related to other items in the same event. Accordingly, when the correlation vectors are similar (i.e., when the distance between the correlation vectors is short), the two items can be regarded as similar items. Since the correlation vectors of similar items can thus be reflected in the list of recommended items, trademark rights can be protected in a more multi-faceted manner.
  • the processor 110 may extract one or more items corresponding to the first item based on the distance between the correlation vectors.
  • the correlation vector may include correlation values between an arbitrary item and other items.
  • the correlation vector for item A may be (0, 1, 4, 5, 14, 7). Also, the correlation vector for the item E may be (14, 6, 22, 1, 0, 4).
  • the processor 110 may determine, as the items corresponding to the first item, the items whose correlation vectors have the shortest distance.
  • the correlation vector may express the frequency with which one item is designated together with another item. Accordingly, when the correlation vectors are similar (i.e., when the distance between the correlation vectors is short), the two items can be regarded as similar items. Since the correlation vectors of similar items can thus be reflected in the list of recommended items, trademark rights can be protected in a more multi-faceted manner.
  • the processor 110 may assign a weight to each of one or more items having similar attributes based on the degree of similarity ( S152 ).
  • the processor 110 may generate a weight by using the similarity. For example, the processor 110 may determine the similarity as a weight as it is. Alternatively, the processor 110 may determine the reciprocal of the similarity as a weight.
  • for example, the processor 110 may determine the similarity itself as the weight; conversely, when the applicant wants broad protection for trademark rights, the processor 110 may determine the reciprocal of the similarity as the weight.
  • FIG. 13 is a flowchart illustrating a process in which a processor generates a list of recommended items according to some embodiments of the present disclosure.
  • the processor 110 may recognize correlation data related to the first item ( S310 ).
  • the processor 110 may recognize the class of the first item and the class of each of the one or more items having a correlation between the class and the first item ( S320 ).
  • the processor 110 may assign weights to one or more items having different classes from the first item ( S330 ).
  • FIG. 14 is a flowchart illustrating a process in which a processor generates a list of recommended items according to some embodiments of the present disclosure.
  • the processor 110 may recognize correlation data related to the first item (S340).
  • the processor 110 may recognize the class of the first item and the class of each of the one or more items having a correlation between the class and the first item ( S350 ).
  • the processor 110 may generate a list of recommended items based on a list of one or more items having the same class as the class of the first item ( S360 ).
  • FIG. 15 is a flowchart illustrating a process in which a processor generates a list of recommended items according to some embodiments of the present disclosure.
  • the processor 110 may recognize correlation data related to the first item (S370).
  • the processor 110 may recognize the class of the first item and the class of each of the one or more items having a correlation between the class and the first item ( S380 ).
  • the processor 110 may generate a list of recommended items based on a list of one or more items having a class different from the class of the first item ( S390 ).
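  • The class-based selection of FIGS. 14 and 15 can be illustrated as follows; the class assignments and item names are made-up examples, not data from the disclosure.

```python
# Keep only correlated items whose class matches, or differs from, the first item's class.
item_class = {"A": 18, "D": 18, "E": 25, "F": 3}   # assumed class numbers
correlated_with_A = ["E", "F", "D"]                # items correlated with the first item A

same_class = [i for i in correlated_with_A if item_class[i] == item_class["A"]]
different_class = [i for i in correlated_with_A if item_class[i] != item_class["A"]]
print(same_class)        # ['D']       -> FIG. 14-style list (same class)
print(different_class)   # ['E', 'F']  -> FIG. 15-style list (different class)
```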
  • FIG. 16 is a simplified, general schematic diagram of an exemplary computing environment in which embodiments of the present disclosure may be implemented.
  • program modules include routines, programs, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • It will be appreciated that the methods of the present disclosure can be implemented not only on single-processor or multiprocessor computer systems, minicomputers, and mainframe computers, but also on personal computers, handheld computing devices, microprocessor-based or programmable consumer electronics, and the like, each of which may operate in conjunction with one or more associated devices, as well as in other computer system configurations.
  • the described embodiments of the present disclosure may also be practiced in distributed computing environments where certain tasks are performed by remote processing devices that are linked through a communications network.
  • program modules may be located in both local and remote memory storage devices.
  • Computers typically include a variety of computer-readable media. Any medium accessible by a computer can be a computer-readable medium, and such computer-readable media include volatile and nonvolatile media, transitory and non-transitory media, and removable and non-removable media.
  • computer-readable media may include computer-readable storage media and computer-readable transmission media.
  • Computer-readable storage media include volatile and non-volatile media, transitory and non-transitory media, and removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data.
  • a computer-readable storage medium may be RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital video disk (DVD) or other optical disk storage device, magnetic cassette, magnetic tape, magnetic disk storage device or other magnetic storage device, or any other medium that can be accessed by a computer and used to store the desired information.
  • Computer readable transmission media typically embodies computer readable instructions, data structures, program modules or other data, etc. in a modulated data signal such as a carrier wave or other transport mechanism, and Includes any information delivery medium.
  • modulated data signal means a signal in which one or more of the characteristics of the signal is set or changed so as to encode information in the signal.
  • computer-readable transmission media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media. Combinations of any of the above are also intended to be included within the scope of computer-readable transmission media.
  • An example environment 1100 implementing various aspects of the present disclosure is shown as including a computer 1102 , the computer 1102 including a processing unit 1104 , a system memory 1106 , and a system bus 1108 .
  • the system bus 1108 couples system components, including but not limited to system memory 1106 , to the processing device 1104 .
  • the processing device 1104 may be any of a variety of commercially available processors. Dual processor and other multiprocessor architectures may also be used as processing unit 1104 .
  • the system bus 1108 may be any of several types of bus structures that may be further interconnected to a memory bus, a peripheral bus, and a local bus using any of a variety of commercial bus architectures.
  • System memory 1106 includes read only memory (ROM) 1110 and random access memory (RAM) 1112 .
  • a basic input/output system (BIOS) is stored in non-volatile memory 1110 such as ROM, EPROM, or EEPROM; the BIOS contains the basic routines that help to transfer information between components within the computer 1102 , such as during startup.
  • RAM 1112 may also include high-speed RAM, such as static RAM, for caching data.
  • the computer 1102 may also include an internal hard disk drive (HDD) 1114 (e.g., EIDE, SATA) - this internal hard disk drive 1114 may also be configured for external use within a suitable chassis (not shown) -, a magnetic floppy disk drive (FDD) 1116 , and an optical disk drive 1120 (e.g., for reading a CD-ROM disk).
  • the hard disk drive 1114 , the magnetic disk drive 1116 , and the optical disk drive 1120 are connected to the system bus 1108 by the hard disk drive interface 1124 , the magnetic disk drive interface 1126 , and the optical drive interface 1128 , respectively.
  • the interface 1124 for implementing an external drive includes at least one or both of Universal Serial Bus (USB) and IEEE 1394 interface technologies.
  • drives and their associated computer-readable media provide non-volatile storage of data, data structures, computer-executable instructions, and the like.
  • drives and media correspond to storing any data in a suitable digital format.
  • Although the description of computer-readable media above refers to HDDs, removable magnetic disks, and removable optical media such as CDs or DVDs, those skilled in the art will appreciate that other types of tangible computer-readable media, such as zip drives, magnetic cassettes, flash memory cards, and cartridges, may also be used in the exemplary operating environment, and that any such media may contain computer-executable instructions for performing the methods of the present disclosure.
  • a number of program modules may be stored in the drive and RAM 1112 , including an operating system 1130 , one or more application programs 1132 , other program modules 1134 , and program data 1136 . All or portions of the operating system, applications, modules, and/or data may also be cached in RAM 1112 . It will be appreciated that the present disclosure may be implemented in various commercially available operating systems or combinations of operating systems.
  • a user may enter commands and information into the computer 1102 via one or more wired/wireless input devices, for example, a keyboard 1138 and a pointing device such as a mouse 1140 .
  • Other input devices may include a microphone, IR remote control, joystick, game pad, stylus pen, touch screen, and the like.
  • these and other input devices are often connected to the processing unit 1104 through an input device interface 1142 that is coupled to the system bus 1108 , but may be connected by other interfaces, such as a parallel port, an IEEE 1394 serial port, a game port, a USB port, an IR interface, and the like.
  • a monitor 1144 or other type of display device is also coupled to the system bus 1108 via an interface, such as a video adapter 1146 .
  • the computer typically includes other peripheral output devices (not shown), such as speakers, printers, and the like.
  • Computer 1102 may operate in a networked environment using logical connections to one or more remote computers, such as remote computer(s) 1148 via wired and/or wireless communications.
  • Remote computer(s) 1148 may be workstations, computing device computers, routers, personal computers, portable computers, microprocessor-based entertainment devices, peer devices, or other common network nodes, and typically include many or all of the components described with respect to the computer 1102 , although only a memory storage device 1150 is shown for simplicity.
  • the logical connections shown include wired/wireless connections to a local area network (LAN) 1152 and/or a larger network, eg, a wide area network (WAN) 1154 .
  • LAN and WAN networking environments are common in offices and companies, and facilitate enterprise-wide computer networks, such as intranets, all of which can be connected to a worldwide computer network, for example, the Internet.
  • When used in a LAN networking environment, the computer 1102 is connected to the local network 1152 through a wired and/or wireless communication network interface or adapter 1156 .
  • Adapter 1156 may facilitate wired or wireless communication to LAN 1152 , which also includes a wireless access point installed therein for communicating with wireless adapter 1156 .
  • When used in a WAN networking environment, the computer 1102 may include a modem 1158 , be connected to a communication computing device on the WAN 1154 , or have other means of establishing communications over the WAN 1154 , such as over the Internet.
  • a modem 1158 which may be internal or external and a wired or wireless device, is coupled to the system bus 1108 via a serial port interface 1142 .
  • program modules described for computer 1102 may be stored in remote memory/storage device 1150 . It will be appreciated that the network connections shown are exemplary and other means of establishing a communication link between the computers may be used.
  • Computer 1102 may operate to communicate with any wireless device or entity that is deployed and operates in wireless communication, for example, a printer, a scanner, a desktop and/or portable computer, a portable data assistant (PDA), a communications satellite, any device or place associated with a wirelessly detectable tag, and a telephone. This includes at least Wi-Fi and Bluetooth wireless technologies. Accordingly, the communication may have a predefined structure as in a conventional network, or may simply be an ad hoc communication between at least two devices.
  • Wi-Fi (Wireless Fidelity) is a wireless technology, like that used in cell phones, that allows such devices, e.g., computers, to transmit and receive data indoors and outdoors, i.e., anywhere within the coverage area of a base station.
  • Wi-Fi networks use a radio technology called IEEE 802.11 (a, b, g, etc.) to provide secure, reliable, and high-speed wireless connections.
  • Wi-Fi can be used to connect computers to each other, to the Internet, and to wired networks (using IEEE 802.3 or Ethernet).
  • Wi-Fi networks may operate in the unlicensed 2.4 and 5 GHz radio bands, for example, at an 11 Mbps (802.11b) or 54 Mbps (802.11a) data rate, or in products that include both bands (dual band).
  • the various embodiments presented herein may be implemented as methods, apparatus, or articles of manufacture using standard programming and/or engineering techniques.
  • article of manufacture includes a computer program, carrier, or media accessible from any computer-readable storage device.
  • computer-readable storage media include magnetic storage devices (e.g., hard disks, floppy disks, magnetic strips, etc.), optical disks (e.g., CDs, DVDs, etc.), smart cards, and flash memory devices (e.g., EEPROMs, cards, sticks, key drives, etc.).
  • various storage media presented herein include one or more devices and/or other machine-readable media for storing information.
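
Purely as an illustration of the networked environment described above, and not as an API defined in this publication, the following minimal Python sketch shows how a client on a computer such as the computer 1102 might request a recommended-item list from a service reachable over the LAN 1152 or the WAN 1154. The endpoint URL, JSON payload, and response shape are assumptions introduced only for this sketch.

    import json
    from urllib import error, request

    # Hypothetical endpoint; the publication does not define any network API.
    SERVICE_URL = "http://recommendation-server.example:8080/recommendations"

    def fetch_recommendations(first_item, timeout=5.0):
        """Ask a remote recommendation service for items related to `first_item`."""
        payload = json.dumps({"item": first_item}).encode("utf-8")
        req = request.Request(
            SERVICE_URL,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        try:
            with request.urlopen(req, timeout=timeout) as resp:
                return json.loads(resp.read())["recommended_items"]
        except error.URLError:
            # No connection to the LAN/WAN, or the service is unreachable.
            return []

    if __name__ == "__main__":
        print(fetch_recommendations("cosmetics"))

Plain HTTP over the standard library is used only to keep the sketch self-contained; any transport available in the environment above (wired LAN, Wi-Fi, or a modem link over the WAN) would serve equally well.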

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • General Physics & Mathematics (AREA)
  • Technology Law (AREA)
  • Human Resources & Organizations (AREA)
  • General Engineering & Computer Science (AREA)
  • General Business, Economics & Management (AREA)
  • Artificial Intelligence (AREA)
  • Strategic Management (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Economics (AREA)
  • Evolutionary Computation (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Operations Research (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

One embodiment of the present invention relates to a computer program stored in a computer-readable recording medium. The computer program may comprise instructions for causing one or more processors to perform the following steps: a step of generating correlation data between items; a step of extracting one or more items corresponding to a first item; and a step of generating a list of recommended items for the first item on the basis of the correlation data between the items and the one or more extracted items.
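
As one way to make the three steps in the abstract concrete, the following minimal Python sketch assumes, purely for illustration, that the correlation data between items is a simple co-designation frequency count over past item lists and that the recommended-item list is obtained by ranking items on that count. The publication leaves the exact form of the correlation data open, and the item names, function names, and toy data below are invented for this sketch.

    from collections import defaultdict
    from itertools import combinations

    # Toy input: each entry is the list of items that were designated together
    # in one past application (invented data, not taken from the publication).
    past_item_lists = [
        ["cosmetics", "soap", "shampoo"],
        ["cosmetics", "perfume"],
        ["soap", "shampoo"],
    ]

    def build_correlation_data(item_lists):
        """Step 1: generate correlation data between items as co-designation counts."""
        counts = defaultdict(int)
        for items in item_lists:
            for a, b in combinations(sorted(set(items)), 2):
                counts[(a, b)] += 1
                counts[(b, a)] += 1
        return counts

    def recommend(first_item, correlation_data, top_k=5):
        """Steps 2 and 3: extract items related to the first item and rank them
        by their correlation with it to form the recommended-item list."""
        related = [(other, count)
                   for (item, other), count in correlation_data.items()
                   if item == first_item]
        related.sort(key=lambda pair: pair[1], reverse=True)
        return [other for other, _ in related[:top_k]]

    correlation_data = build_correlation_data(past_item_lists)
    print(recommend("cosmetics", correlation_data))  # items most often co-designated with "cosmetics"

Ranking by raw co-designation frequency is only one possible choice; the same skeleton accommodates other correlation measures without changing the surrounding steps.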
PCT/KR2021/001532 2020-05-13 2021-02-05 Item recommendation method WO2021230469A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020200056905A KR20210138893A (ko) 2020-05-13 2020-05-13 Item recommendation method
KR10-2020-0056905 2020-05-13

Publications (1)

Publication Number Publication Date
WO2021230469A1 true WO2021230469A1 (fr) 2021-11-18

Family

ID=78524782

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/001532 WO2021230469A1 (fr) 2020-05-13 2021-02-05 Item recommendation method

Country Status (2)

Country Link
KR (1) KR20210138893A (fr)
WO (1) WO2021230469A1 (fr)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080088026A (ko) * 2007-03-28 2008-10-02 전대휘 Intelligent trademark search service providing system and method
JP2014006822A (ja) * 2012-06-27 2014-01-16 Jvc Kenwood Corp Information selection device, information selection method, terminal device, and computer program
KR101562279B1 (ko) * 2013-09-16 2015-10-30 고려대학교 산학협력단 Portable terminal device based on user intention inference and content recommendation method using same
KR20190030435A (ko) * 2017-09-14 2019-03-22 주식회사 세진마인드 Method and apparatus for recommending designated goods using natural language processing, and computer program stored in a computer-readable storage medium
KR20190031421A (ko) * 2017-09-16 2019-03-26 조영록 Designated item recommendation method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114840777A (zh) * 2022-07-04 2022-08-02 杭州城市大脑有限公司 Multi-dimensional elderly-care service recommendation method and apparatus, and electronic device
CN114840777B (zh) * 2022-07-04 2022-09-27 杭州城市大脑有限公司 Multi-dimensional elderly-care service recommendation method and apparatus, and electronic device

Also Published As

Publication number Publication date
KR20210138893A (ko) 2021-11-22

Similar Documents

Publication Publication Date Title
WO2020159232A1 Method, apparatus, electronic device and computer-readable storage medium for searching for an image
WO2020190112A1 Method, apparatus, device and medium for generating subtitle information for multimedia data
WO2020091210A1 System and method for integrating databases based on a knowledge graph
WO2020138928A1 Information processing method, apparatus, electrical device and computer-readable storage medium
WO2018174603A1 Method and device for displaying reference numeral explanations in a patent drawing image using machine learning based on artificial intelligence technology
WO2020214011A1 Information processing method and apparatus, electronic device and computer-readable storage medium
WO2022005188A1 Entity recognition method, apparatus, electronic device and computer-readable storage medium
WO2018034426A1 Method for automatically correcting errors in a tagged corpus using kernel PDR rules
WO2023153818A1 Method for providing a neural network model and electronic apparatus for implementing same
WO2020036297A1 Electronic apparatus and control method therefor
WO2022102937A1 Methods and systems for predicting non-default actions with respect to unstructured utterances
WO2021162481A1 Electronic device and control method therefor
WO2021230469A1 Item recommendation method
WO2023172025A1 Method for predicting information on an association between a pair of entities using a time-series information encoding model, and prediction system generated using same
WO2020091253A1 Electronic device and method for controlling an electronic device
WO2022255632A1 Artificial neural network device and method for automatic design creation, using UX bits
WO2023080276A1 Query-based database-linked distributed deep learning system, and method therefor
WO2021107360A2 Electronic device for determining a degree of similarity and control method thereof
EP3523932A1 Method and apparatus for filtering a plurality of messages
WO2011068315A4 Apparatus for selecting an optimal database using a maximum conceptual strength recognition technique, and method therefor
WO2023048537A1 Server and method for providing recommendation content
WO2021194105A1 Method for training an expert simulation model, and training device
WO2023163405A1 Method and apparatus for updating or replacing a credit evaluation model
WO2023055047A1 Prediction model training method, information prediction method and corresponding device
WO2023128221A1 Deep learning-based fraud detection method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21804935

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21804935

Country of ref document: EP

Kind code of ref document: A1