WO2023167496A1 - Method for composing music using artificial intelligence - Google Patents

Method for composing music using artificial intelligence

Info

Publication number
WO2023167496A1
WO2023167496A1 (PCT/KR2023/002837)
Authority
WO
WIPO (PCT)
Prior art keywords
situation
user
composition
music
expression
Prior art date
Application number
PCT/KR2023/002837
Other languages
English (en)
Korean (ko)
Inventor
손다영
Original Assignee
손다영
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 손다영
Publication of WO2023167496A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B15/00 Teaching music
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 Details of electrophonic musical instruments

Definitions

  • The present invention relates to a method for composing music using artificial intelligence and, more particularly, to a technique for supporting music composition through artificial intelligence so that a user's emotional state and daily life can be expressed musically.
  • Music can be one of the most effective ways to express a person's emotions or current situation. Feelings and situations can of course be expressed in writing, but vocabulary has limits in capturing subtle feelings, so a written account often fails to convey them accurately enough for the reader to empathize. Musical expression, on the other hand, can reflect the many fine nuances that cannot be expressed lexically, and so can further improve the listener's degree of empathy.
  • The present invention has been made to solve the above problems of the prior art, and aims to provide a method of supporting music composition so that a user's emotional state and daily life can be expressed musically.
  • An embodiment of a method for composing music using artificial intelligence includes: a basic element extraction step of extracting a bass rhythm according to a user's expression situation through artificial intelligence; an auxiliary element extraction step of extracting one or more of a plurality of melodies or a plurality of harmonies corresponding to the expression situation through the artificial intelligence; an auxiliary element selection step in which the user selects one or more of the extracted melodies or harmonies; and a composition generation step of generating a music composition by combining, through the artificial intelligence, the selected melodies or harmonies with the extracted bass rhythm.
  • The basic element extraction step may include receiving text or humming from the user and determining the user's expression situation based on it, and extracting a bass rhythm corresponding to that situation from a database that holds bass rhythms for each expression situation. The auxiliary element extraction step may include extracting a plurality of melodies or a plurality of harmonies, and determining which of them to recommend based on the extracted bass rhythm.
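To make this two-stage lookup concrete (recognize the expression situation, then fetch a matching bass rhythm from a per-situation database), a minimal Python sketch follows. The situation labels, rhythm patterns, database shape, and function name are illustrative assumptions, not details from the patent.

```python
# Sketch of the basic element extraction step: map a recognized expression
# situation to a stored bass rhythm. All labels and patterns are hypothetical.

# Hypothetical database of bass rhythms keyed by expression situation.
# Each rhythm is a list of note durations (in quarter notes) for one measure.
BASS_RHYTHM_DB = {
    "joy":     [0.5, 0.5, 1.0, 0.5, 0.5, 1.0],
    "sadness": [2.0, 2.0],
    "calm":    [1.0, 1.0, 1.0, 1.0],
}

def extract_base_rhythm(expression_situation: str) -> list[float]:
    """Return the stored bass rhythm for the situation, defaulting to 'calm'."""
    return BASS_RHYTHM_DB.get(expression_situation, BASS_RHYTHM_DB["calm"])

rhythm = extract_base_rhythm("joy")
assert sum(rhythm) == 4.0  # durations fill one 4/4 measure
```

In a real system the dictionary would be a database table populated by the learning step described later, but the lookup contract stays the same.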
  • The composition generation step may include combining the selected melodies or harmonies with the bass rhythm for each measure, and combining and arranging a plurality of measures to create the music composition.
  • The method may further include a situation expression information generation step of providing the user with a list of one or more instruments suitable for the music composition, receiving a selection of one or more instruments, and generating situation expression information in which the composition is played with the selected instrument.
  • An emotion graph or emotion image corresponding to the music composition may also be generated, and situation expression information including that emotion graph or emotion image may be generated.
  • The method may further include a storing step of organizing the situation expression information by date corresponding to the user's expression situation and accumulating it in storage, or linking the situation expression information to the user's social networking service (SNS).
  • In this way, music can overcome the limitation that feelings or situations expressed in writing are not conveyed sympathetically to the reader.
  • Composition results are generated to suit the emotion or situation of the moment, so the user's emotion or situation is faithfully reflected.
  • By replaying the resulting music later, the user can re-experience the emotions felt at that time more vividly.
  • FIG. 1 shows one embodiment of a music composition system according to the present invention.
  • FIG. 2 shows a configuration diagram of an embodiment of a composition support service device of a music composition system according to the present invention.
  • FIG. 3 shows a block diagram of an embodiment of a database of a music composition system according to the present invention.
  • FIG. 4 shows a flow diagram of one embodiment of a music composition method according to the present invention.
  • FIG. 5 shows a flow diagram of another embodiment of a music composition method according to the present invention.
  • FIG. 6 shows a flow diagram of another embodiment of a music composition method according to the present invention.
  • The present invention proposes a technique for supporting music composition through artificial intelligence so that a user's emotional state and daily life can be expressed musically.
  • FIG. 1 shows one embodiment of a music composition system according to the present invention.
  • A music composition system may include a user terminal 10, a composition support service device 100, a database 200, and the like.
  • The user terminal 10 may connect to the composition support service device 100, provide it with information about the user's emotional state and daily life, and receive a music composition based on that information.
  • Any communication terminal that can access the composition support service device 100 and let the user input the situation to be expressed may be used.
  • Commonly used terminals such as a smartphone 10a, a laptop computer 10b, or a PC 10c may be applied.
  • the user may transmit various situations, such as his/her emotional state and daily life, to the composition support service device 100 through the user terminal 10 .
  • The user may input text, such as a word or sentence corresponding to the situation to be expressed, or humming.
  • An image such as a photo or video capturing the user's situation may also be input.
  • the user may select or process various pieces of information provided from the composition support service device 100 through the user terminal 10 .
  • The composition support service device 100 receives situational information such as the user's emotional state and daily life through the user terminal 10, analyzes it through artificial intelligence, extracts basic and auxiliary musical elements suited to the user's expression situation, and combines or processes them to create a music composition.
  • the musical elements can include rhythm, melody, and harmony.
  • Rhythm, the flow of beats, is the basic element; the melody and harmony added on top of it are the auxiliary elements.
  • the composition support service device 100 may extract, process, and combine these musical elements to generate a music composition corresponding to the user's expression situation.
  • the composition support service device 100 may obtain various musical element information about rhythm, melody, harmony, and the like through the Internet network 30 .
  • The composition support service device 100 collects the music and interpretation information available on the Internet network 30 and, through artificial-intelligence learning, determines which situation expression each rhythm, melody, or harmony can correspond to.
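One very simple way such per-situation tagging could work is keyword scoring over the collected interpretation text; the sketch below is a stand-in for the unspecified artificial-intelligence learning, and the keyword sets and labels are illustrative assumptions.

```python
# Sketch of tagging a collected musical element with an expression situation
# by scoring its interpretation text against per-situation keyword sets.
# Keyword lists and situation labels are hypothetical, not from the patent.

SITUATION_KEYWORDS = {
    "joy":     {"bright", "upbeat", "major", "fast", "dance"},
    "sadness": {"slow", "minor", "melancholic", "dark", "mournful"},
}

def classify_element(interpretation: str) -> str:
    """Pick the situation whose keyword set overlaps the text the most."""
    words = set(interpretation.lower().split())
    scores = {s: len(words & kw) for s, kw in SITUATION_KEYWORDS.items()}
    return max(scores, key=scores.get)

label = classify_element("A slow melancholic piece in a minor key")
# "sadness" wins here: three of its keywords appear in the text
```

A trained text classifier would replace the keyword overlap, but the output contract (element description in, situation label out) is the same.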
  • the composition support service device 100 may classify music element information about rhythm, melody, harmony, and the like for each situation expression and store it in the database 200 .
  • The composition support service device 100 may hold information on various instruments and generate situation expression information in which the created music composition is played with a given instrument.
  • The composition support service device 100 may generate situation expression information by creating an emotion graph or emotion image corresponding to a music composition and matching it to the composition.
  • The composition support service device 100 may generate situation expression information by creating an emotion graph that changes as the music composition plays, or an image corresponding to the emotional state, or by matching user-input photographs or videos to the playback of the composition.
  • composition support service device 100 may organize and store the generated situational expression information in the database 200 .
  • the composition support service device 100 may store situation expression information for each date corresponding to the user's diary format.
  • The composition support service device 100 may classify the situation expression information into categories according to the situations felt by the user and store it accordingly.
  • Situation expression information can be classified and stored in various ways: by emotion category (the emotions the user felt), by period or date, or by the subject who felt the emotion or experienced the situation.
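The multi-index storage just described can be sketched as follows; the class name and record fields are assumptions for illustration, not the patent's data model.

```python
from collections import defaultdict
from datetime import date

# Sketch of the diary-style store: the same situation expression record is
# indexed by date, by emotion, and by subject, so each category list in the
# description maps to one index. Field names are hypothetical.

class SituationExpressionStore:
    def __init__(self):
        self.by_date = defaultdict(list)
        self.by_emotion = defaultdict(list)
        self.by_subject = defaultdict(list)

    def add(self, record: dict) -> None:
        """File one record under every category it belongs to."""
        self.by_date[record["date"]].append(record)
        self.by_emotion[record["emotion"]].append(record)
        self.by_subject[record["subject"]].append(record)

store = SituationExpressionStore()
store.add({"date": date(2022, 3, 2), "emotion": "joy",
           "subject": "family", "composition": "comp-001"})
assert store.by_emotion["joy"][0]["composition"] == "comp-001"
```

A relational database would express the same thing with secondary indexes on the date, emotion, and subject columns.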
  • The situation expression information stored in this way may later be extracted from the database 200 by the composition support service device 100 and provided to the user terminal 10 at the user's request.
  • the composition support service device 100 may provide corresponding situation expression information to the social networking service 50 in association with the social networking service (SNS) 50 .
  • The composition support service device is described in more detail with reference to FIG. 2, a configuration diagram of an embodiment of the composition support service device of the music composition system according to the present invention.
  • The composition support service device 100 may include an expression situation recognition unit 110, a music element extraction unit 120, a composition generation unit 130, a situation expression information generation unit 140, and the like.
  • The expression situation recognition unit 110 may receive information about the user's emotional state, daily life, and the like through the user terminal 10 and analyze it to recognize the situation the user wants to express.
  • the music element extractor 120 may extract a musical element corresponding to the analyzed expression situation of the user.
  • a bass rhythm according to the user's expression situation may be extracted, and various melodies and harmonies corresponding to the user's expression situation may be extracted based on the bass rhythm.
  • The music element extraction unit 120 may extract music element information corresponding to the expression situation from the music element information previously stored in the database 200, or may search the Internet network 30 to extract music elements corresponding to that situation.
  • the composition generator 130 may generate a music composition by combining and processing the musical elements extracted by the music element extractor 120 .
  • Various bass rhythms suited to the user's expression situation may be presented so the user can select the desired rhythm, and various melodies and harmonies suited to the situation may likewise be presented so the user can select the desired melody and harmony.
  • Alternatively, the composition generation unit 130 may create a music composition through artificial intelligence without the user's selection, by combining and processing various melodies and harmonies with the bass rhythm suited to the user's expression situation.
  • The composition generation unit 130 may provide the music composition as sheet music to the user terminal 10 so the user can review it, and when the user directly selects or edits the desired melody and harmony, it may evaluate the appropriateness of musical elements such as dissonance, pitch, and tempo and provide feedback on them.
  • The composition generation unit 130 may also receive lyrics for the music composition through the user terminal 10 and match them to the composition.
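As one illustration of the dissonance feedback mentioned above, the sketch below flags melody notes that clash with the selected chord. The MIDI-style pitch numbering and the choice of "dissonant" intervals are illustrative assumptions, not the patent's evaluation method.

```python
# Sketch of an appropriateness check: flag melody notes that form a
# minor 2nd, tritone, or major 7th against any tone of the current chord.
# Pitches are MIDI note numbers (60 = middle C); thresholds are hypothetical.

DISSONANT_INTERVALS = {1, 6, 11}  # semitone, tritone, major 7th (mod 12)

def dissonance_feedback(melody: list[int], chord: list[int]) -> list[int]:
    """Return indices of melody notes clashing with any chord tone."""
    flagged = []
    for i, note in enumerate(melody):
        if any((note - tone) % 12 in DISSONANT_INTERVALS for tone in chord):
            flagged.append(i)
    return flagged

# C major chord (C, E, G) against a short melody: F# (66) forms a
# tritone with C (60), so index 2 is flagged.
issues = dissonance_feedback([60, 64, 66, 67], [60, 64, 67])
```

Real feedback would also weigh tempo and range, but each check can follow this same pattern of scanning the melody against a rule set.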
  • The situation expression information generation unit 140 may generate situation expression information suited to the user's expression situation by matching additional information to the music composition generated by the composition generation unit 130.
  • The situation expression information generation unit 140 may select an instrument suitable for playing the music composition from among various instruments, or provide an instrument list to the user, and generate situation expression information in which the composition is played with the selected instrument.
  • The situation expression information generation unit 140 may generate situation expression information by creating an emotion graph or emotion image corresponding to the music composition and matching it to the composition. For example, situation expression information in which a user's photo or video is matched to the playback of the composition may be generated.
  • the situation expression information generation unit 140 may arrange and store the generated situation expression information by date corresponding to the expression situation of the user in the database 200 .
  • the corresponding situation expression information may be maintained and stored in the database 200 in the form of a diary, or the corresponding situation expression information may be classified and stored in the database 200 for each category of situation expression.
  • The situation expression information stored in the database 200 may later be extracted by the situation expression information generation unit 140 at the user's request and provided to the user terminal 10.
  • The situation expression information generation unit 140 may also provide situation expression information in association with the social networking service 50.
  • FIG. 3 shows a block diagram of an embodiment of a database of a music composition system according to the present invention.
  • the database 200 may include a music element information storage unit 210, a situation expression information storage unit 250, and the like.
  • the music element information storage unit 210 may store music element information about rhythm, melody, harmony, and the like.
  • rhythm information can be classified and stored according to various emotions or situations as basic element information
  • various melodies and harmonies can be classified and stored according to various emotions or situations as auxiliary element information.
  • the situation expression information storage unit 250 may classify and store music compositions or situation expression information.
  • The situation expression information storage unit 250 may classify and store, by emotion, situation, and date, either only the music compositions generated by the composition support service device 100 or situation expression information in which various additional information has been added to those compositions.
  • the present invention proposes a method of supporting music composition using the music composition system according to the present invention described above.
  • Figure 4 shows a flow diagram of one embodiment of a music composition method according to the present invention.
  • the composition support service device 100 may receive information about a situation to be expressed by the user through the user terminal 10 (S110).
  • the user's expression situation information may be text information such as words and sentences, humming information, or image information such as photos and videos.
  • The composition support service device 100 may recognize the situation the user wants to express based on the provided information (S120). For example, the artificial intelligence can determine the emotional or situational state, and thereby recognize the situation the user wants to express, through word and sentence analysis for text information, rhythm and melody analysis for humming information, or video analysis for photos and videos.
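The rhythm analysis for humming could, for example, estimate tempo from note onset times and map it to a coarse emotional state. The BPM thresholds and situation labels below are illustrative assumptions, not values from the patent.

```python
# Sketch of humming analysis: average the gaps between hummed note onsets
# to estimate tempo, then map tempo to a hypothetical expression situation.

def tempo_from_onsets(onsets_sec: list[float]) -> float:
    """Estimate beats per minute from successive note onset times (seconds)."""
    gaps = [b - a for a, b in zip(onsets_sec, onsets_sec[1:])]
    return 60.0 / (sum(gaps) / len(gaps))

def situation_from_tempo(bpm: float) -> str:
    """Map an estimated tempo to a coarse, hypothetical situation label."""
    if bpm >= 120:
        return "joy"
    if bpm <= 70:
        return "sadness"
    return "calm"

bpm = tempo_from_onsets([0.0, 0.5, 1.0, 1.5])  # 0.5 s per hummed beat
situation = situation_from_tempo(bpm)
```

A production system would first detect onsets from audio and combine tempo with melodic contour, but the onsets-to-label pipeline would look like this.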
  • the composition support service device 100 may extract bass rhythm information as a basic musical element according to a situation that the user wants to express (S130).
  • the composition support service device 100 may extract bass rhythm information from the database 200 for each emotion or situation, or may extract bass rhythm information from the Internet network 30 .
  • composition support service device 100 may extract a plurality of pieces of melody information or a plurality of pieces of harmony information corresponding to a situation that the user wants to express (S140).
  • The composition support service device 100 may, through artificial intelligence, extract various melody or harmony information for each emotion or situation from the database 200, or extract it from the Internet network 30.
  • The extracted melody may take the form of a minimalist patterned melody, and, reflecting recent trends, harmonies of romantic or modern music composed only of basic triads, rather than overly classical harmonies, may be extracted.
  • The composition support service device 100 may determine a plurality of melodies or harmonies to recommend based on the extracted bass rhythm, and thereby recommend melodies and harmonies suited to the user's expression situation.
  • composition support service device 100 may present the extracted plurality of melodies to the user through the user terminal 10 and support the user to set the tempo, notes, number of bars, instruments, and the number of melodies.
  • composition support service apparatus 100 may support a user to select a melody most similar to his or her feelings among a plurality of melodies.
  • The composition support service device 100 may combine and process (S150) the selected melodies or harmonies with the bass rhythm extracted through artificial intelligence, thereby creating a music composition (S160).
  • The composition support service device 100 may generate the music composition by combining, through artificial intelligence, the selected melodies or harmonies with the bass rhythm for each measure, then combining and arranging a plurality of measures.
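The measure-by-measure combination can be sketched as follows; the data shapes (pitch lists per measure, chord-name strings) are illustrative assumptions about how the elements might be represented.

```python
# Sketch of the composition generation step: pair each measure's slice of
# the selected melody and harmony with the bass rhythm, then concatenate
# the measures in order. All data shapes here are hypothetical.

def build_composition(bass_rhythm, melody_measures, harmony_measures):
    """Combine per-measure melody/harmony with the shared bass rhythm."""
    composition = []
    for melody, harmony in zip(melody_measures, harmony_measures):
        composition.append({
            "bass_rhythm": bass_rhythm,  # same base pattern in every measure
            "melody": melody,            # MIDI pitches for this measure
            "harmony": harmony,          # chord(s) for this measure
        })
    return composition

piece = build_composition(
    bass_rhythm=[1.0, 1.0, 1.0, 1.0],
    melody_measures=[[60, 62, 64, 65], [67, 65, 64, 62]],
    harmony_measures=[["C"], ["G"]],
)
assert len(piece) == 2
```

The returned list of measure dictionaries is the kind of intermediate structure that could then be rendered to sheet music or audio.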
  • The composition support service device 100 may provide the music composition as sheet music to the user terminal 10 so the user can review it, and when the user directly selects or edits the desired melody and harmony, it may evaluate the appropriateness of musical elements such as dissonance, pitch, and tempo and provide feedback on them.
  • composition support service device 100 may receive lyrics for a corresponding music composition through the user terminal 10 and match them to the music composition.
  • the composition support service device 100 may provide the user with a list of instruments suitable for the generated music composition through the user terminal 10 and may generate situational expression information in which the music composition is played with the user's selected instrument.
  • the musical composition and situation expression information generated in this way may be provided to the user through the user terminal 10 or the like.
  • Situation expression information may also be generated by reflecting various states of the user in addition to the music composition; an embodiment of this is examined next.
  • The composition support service device 100 creates a music composition through the above-described embodiment (S210), additionally creates an emotion graph or emotion image corresponding to the composition (S220), and may generate situation expression information in which the emotion graph or emotion image is reflected in the playback of the composition (S230).
  • An emotion graph that changes in various ways according to the playback state of the music composition, or various images expressing emotions, may be created and inserted alongside the playback of the composition.
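A playback-synchronized emotion graph could, for instance, derive one intensity value per measure from the melody, so the curve can be animated in step with the music. The weighting below (pitch height plus note density) is an illustrative assumption, not the patent's formula.

```python
# Sketch of an emotion graph: one intensity value per measure, derived from
# how high and how dense the melody is. Weights and ranges are hypothetical.

def emotion_intensity_curve(measures: list[list[int]]) -> list[float]:
    """Return one intensity value per measure of MIDI melody pitches."""
    curve = []
    for notes in measures:
        if not notes:
            curve.append(0.0)
            continue
        avg_pitch = sum(notes) / len(notes)
        pitch_term = (avg_pitch - 48) / 36      # roughly 0..1 over C3..C6
        density_term = len(notes) / 8.0         # 8 notes/measure = busy
        curve.append(round(pitch_term * 0.5 + density_term * 0.5, 3))
    return curve

curve = emotion_intensity_curve([[60, 62], [72, 74, 76, 77]])
# the second measure scores higher: higher pitches and more notes
assert curve[1] > curve[0]
```

Plotting this curve over time, one point per measure, gives the changing graph described above.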
  • situational expression information may be created by inserting a user's photo, video, or text so that it can be displayed together with the performance of the music composition.
  • composition support service device 100 may accumulate and store situational expression information in the form of a diary in the database 200 . At this time, a date or a user's comment may be matched with the situation expression information and stored in the database 200 .
  • composition support service device 100 may classify categories for each corresponding emotion or situation and store situational expression information in the database 200 .
  • composition support service device 100 may link the generated situational expression information or the situational expression information stored in the database 200 with the social networking service 50 (S250) and reflect them on the user's social network.
  • the user's situation expression information stored in this way may be provided to the user terminal 10 or the like according to the user's request in the future.
  • FIG. 6 shows a flow diagram of an embodiment in which stored situation expression information is provided at the user's request.
  • the composition support service device 100 organizes and stores the situation expression information reflecting the user's emotion or situation, and provides a category-by-category list of situation expression information upon a user's request through the user terminal 10 (S310).
  • a list may be provided by emotion category such as joy, anger, sadness, etc.
  • a list may be provided by category by period or date, or a category list by subject who feels emotion or situation may be provided.
  • The user may select a desired situation expression from the list provided to the user terminal 10 (S320), and the composition support service device 100 may selectively extract the corresponding situation expression information from the database 200 according to the user's selection (S330).
  • composition support service device 100 may provide the extracted context expression information to the user terminal 10 to reproduce the context expression information (S340).
  • By displaying the text, photos, videos, and the like together with the music, the user can re-experience the emotion or situation felt at that time.
  • 100: composition support service device
  • 130: composition generation unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Tourism & Hospitality (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Primary Health Care (AREA)
  • Acoustics & Sound (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Educational Technology (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Educational Administration (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Auxiliary Devices For Music (AREA)

Abstract

Disclosed is a method for composing music using artificial intelligence such that a user's emotional state and daily life can be expressed musically.
PCT/KR2023/002837 2022-03-02 2023-03-02 Method for composing music using artificial intelligence WO2023167496A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020220026904A KR102492074B1 (ko) 2022-03-02 2022-03-02 Method for composing music using artificial intelligence
KR10-2022-0026904 2022-03-02

Publications (1)

Publication Number Publication Date
WO2023167496A1 (fr) 2023-09-07

Family

ID=85110153

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2023/002837 WO2023167496A1 (fr) 2022-03-02 2023-03-02 Method for composing music using artificial intelligence

Country Status (2)

Country Link
KR (1) KR102492074B1 (fr)
WO (1) WO2023167496A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102492074B1 (ko) * 2022-03-02 2023-01-26 손다영 Method for composing music using artificial intelligence

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011175006A (ja) * 2010-02-23 2011-09-08 Sony Corp Information processing device, automatic music composition method, learning device, learning method, and program
KR20120060085A (ko) * 2010-12-01 2012-06-11 주식회사 싸일런트뮤직밴드 Automatic composition system, automatic composition method using the same, and recording medium on which the method is recorded
KR20180130153A (ko) * 2017-05-29 2018-12-07 한양대학교 에리카산학협력단 Automatic composition method and apparatus using a history of each composition stage
KR20210033850A (ko) * 2019-09-19 2021-03-29 주식회사 세미콘네트웍스 Method for calculating voice and facial emotion values, and output method of an artificial intelligence speaker using the same
KR102492074B1 (ko) * 2022-03-02 2023-01-26 손다영 Method for composing music using artificial intelligence

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101886534B1 (ко) 2016-12-16 2018-08-09 아주대학교산학협력단 Composition system and composition method using artificial intelligence


Also Published As

Publication number Publication date
KR102492074B1 (ko) 2023-01-26

Similar Documents

Publication Publication Date Title
  • WO2018030672A1 (fr) Robot automation consultation method and system for consulting with a customer according to a predetermined scenario using machine learning
  • WO2016006727A1 (fr) Cognitive function testing device and method
  • WO2023167496A1 (fr) Method for composing music using artificial intelligence
  • WO2011136425A1 (fr) Device and method for networking a resource description framework using an ontology schema comprising a combined named-entity dictionary and combined mining rules
  • WO2020085663A1 (fr) Artificial-intelligence-based automatic logo generation system and logo generation service method using the same
  • WO2016060296A1 (fr) Apparatus for recording audio information and control method thereof
  • WO2019112145A1 (fr) Method, device, and system for sharing photographs based on voice recognition
  • WO2019031650A1 (fr) Method for providing accompaniment based on a user's hummed melody, and apparatus therefor
  • WO2016035970A1 (fr) Advertising system using advertisement search
  • WO2020253115A1 (fr) Product recommendation method, apparatus, and device based on voice recognition, and storage medium
  • WO2023282459A1 (fr) Method and device for providing a styling service using blockchain-based database construction
  • WO2011162444A1 (fr) Named-entity dictionary combined with an ontology schema, and device and method for updating a named-entity dictionary or mining-rule database using a mining rule
  • WO2022145946A1 (fr) Language learning system and method based on AI-recommended training images and example sentences
  • WO2021167220A1 (fr) Method and system for automatically generating a table of contents for a video based on its content
  • WO2015102125A1 (fr) Text-message conversation system and method
  • WO2016182400A1 (fr) Mobile device and system having a communication-information screen and access functions, and method therefor
  • WO2024101754A1 (fr) System for providing an AI-based mathematics tutoring service capable of automatically classifying topic and difficulty level and re-editing mathematics questions, and method of applying the same
  • WO2023136644A1 (fr) Patent document search server providing a user-customized file-name generation function when downloading a document, and patent document search method using the same
  • WO2014204074A1 (fr) Method for generating and extracting an electronic document, and recording medium therefor
  • WO2019124575A1 (fr) Language learning support method and language learning support server using voice dubbing
  • WO2023146030A1 (fr) Artificial-intelligence-based interaction device, method, and program integrating emotion, concentration level, and conversation
  • WO2022177372A1 (fr) System for providing a tutoring service using artificial intelligence, and method therefor
  • WO2022103068A1 (fr) Method and device for creating a customer profile based on online chat with multiple representatives
  • WO2011136454A1 (fr) System and method for generating a sound source using an image
  • CN114189738B (zh) Sound effect synthesis method and apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23763695

Country of ref document: EP

Kind code of ref document: A1