CN117708347A - Method and system for outputting multi-mode result by large model based on API (application program interface) endpoint - Google Patents

Method and system for outputting multi-mode result by large model based on API (application program interface) endpoint

Info

Publication number
CN117708347A
Authority
CN
China
Prior art keywords
data
large model
image
api
endpoint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311717511.0A
Other languages
Chinese (zh)
Inventor
王伟
贾惠迪
邹克旭
黄思
郭东宸
常鹏慧
孙悦丽
朱珊娴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yingshi Ruida Technology Co ltd
Original Assignee
Beijing Yingshi Ruida Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yingshi Ruida Technology Co ltd filed Critical Beijing Yingshi Ruida Technology Co ltd
Priority to CN202311717511.0A priority Critical patent/CN117708347A/en
Publication of CN117708347A publication Critical patent/CN117708347A/en
Pending legal-status Critical Current

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to the technical field of data fusion, and in particular to a method for outputting multi-modal results from a large model based on an API (Application Programming Interface) endpoint. The user does not need professional drawing knowledge: by interacting with the large model, customized pictures can be generated according to the user's own requirements. In particular, during the picture fine-tuning stage, the real-time preview provided by the large model offers a more intuitive and convenient interactive experience, allowing the user to control the drawing process more finely, while the analysis capability of the large model can also be used to analyze the data.

Description

Method and system for outputting multi-modal results by a large model based on an API (Application Programming Interface) endpoint
Technical Field
The invention relates to the technical field of data fusion, and in particular to a method for outputting multi-modal results from a large model based on an API (Application Programming Interface) endpoint.
Background
In knowledge-graph-based generation, data of different modalities are fused into the knowledge graph by associating the multimodal data with the entities, attributes, and relations in the graph. For example, an image is associated with an entity in the graph and its image features are stored as entity attributes. Features are extracted from the multimodal data to enrich the information in the graph, using techniques such as computer vision, natural language processing, and audio processing to extract meaningful features from images, text, or audio. In the knowledge graph, the multimodal data is combined with the graph structure by associating and linking the entities, attributes, and relations of data in different modalities. In this way, an entity in the graph can contain multimodal data simultaneously, and the associations and relationships between different modalities can be modeled and represented.
Transformer-based models can be used to generate multimodal results; although originally designed for natural language processing tasks, they have been extended to multimodal scenarios such as images, audio, and video. The Transformer uses a self-attention mechanism to capture context in the input sequence: it encodes each input with global context awareness by computing association weights between every input position and all other positions. The encoder is responsible for encoding the input sequence, and the decoder generates an output sequence based on the encoder output and the context information. The Transformer captures different semantic information by introducing multiple attention heads; each head can focus on a different part of the input sequence and provide diverse feature representations. To preserve positional information in the sequence, the Transformer introduces positional encoding, embedding each position in the sequence into the feature representation. When generating multimodal results, a Transformer can be applied to the encoding and decoding of data in each modality. For example, for a multimodal generation task over images and text, the images and text can each be taken as input sequences and encoded and decoded with a Transformer model to generate a multimodal result.
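For reference, the self-attention weights described above are conventionally computed with the standard scaled dot-product formulation (the usual Transformer definition, not spelled out in the original text): Attention(Q, K, V) = softmax(QK^T / √d_k) · V, where Q, K, and V are the query, key, and value matrices projected from the input sequence and d_k is the key dimension; the softmax over QK^T yields exactly the association weights between each input position and all other positions.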
The generation of multimodal results from existing knowledge graphs depends on the completeness and quality of the input data. If the coverage of the multimodal data is narrow, or if the data is noisy or erroneous, the accuracy and completeness of the generated multimodal results may be compromised. In a knowledge graph, associating and linking multimodal data with entities, attributes, and relations is a critical step. However, for complex multimodal data, making accurate associations and links can be difficult, especially when the data volume is large, semantic similarity is low, or heterogeneous data sources are present.
Disclosure of Invention
In view of the above problems with generating multimodal results from existing knowledge graphs, the invention provides a method for a large model to output multimodal results based on an API endpoint. The process is as follows: acquire an input text, classify its sensitivity level, and create a drawing option API endpoint according to the sensitivity level; call the drawing option API through the large model to draw a picture; acquire an adjustment text, which the large model transmits to the drawing option API endpoint to redraw the picture; and have the large model logically reason over the input text and the adjustment text and, in combination with the generated picture, output a functional result.
Creating the drawing option API endpoint specifically includes: 1) selecting a back-end framework and building the API; the back-end framework can be any one of Django, Flask, Express.js, Ruby on Rails, Spring Boot, and ASP.NET; 2) defining one or more routes to process image generation requests, where each route corresponds to a different endpoint and each endpoint serves a different type of image generation request; 3) parsing the request parameters sent from the front end and custom-generating the image according to those parameters.
The large model transmitting the adjustment text to the drawing option API endpoint specifically includes: 1) parsing the request parameters sent from the front end; the parameters include color, position, and title; 2) custom-generating images according to the parameters; 3) selecting a drawing library according to the parsed parameters, generating the image with the selected library, and saving it; the URL of the custom-generated image, or the image data itself, is returned to the front end as the response. If the image is stored on the server, the image URL is returned; if the image data needs to be embedded in the response, it can be encoded in Base64 format and returned in a JSON response.
In front-end use, the back-end API endpoint can be invoked by initiating a POST request; the front end receives the image URL or image data in the response and then displays the image on the user interface.
Classifying the sensitivity level of the input text comprises calculating a comprehensive sensitivity score using a weighted average:
comprehensive sensitivity score = (PII count score × weight 1) + (field sensitivity score × weight 2) + (data access score × weight 3)
where the PII count score ranges from 0 to 1, with 1 meaning that all data contains PII (information that can be used to uniquely identify, contact, or locate a person); the field sensitivity score ranges from 0 to 1, with 1 meaning that all fields are highly sensitive; the data access score ranges from 0 to 1, with 1 meaning that the data is frequently and widely accessed; and weight 1, weight 2, and weight 3 are defined according to the needs and policies of the organization and determine the importance of the different indicators. The score range is divided into four classes: non-sensitive (0-0.3), low-sensitivity (0.31-0.6), medium-sensitivity (0.61-0.8), and high-sensitivity (0.81-1).
The method and the device realize the conversion of the text description and selected options input by a user into actual images. The user does not need professional drawing knowledge: by interacting with the large model, customized pictures can be generated according to the user's own requirements. In particular, during the picture fine-tuning stage, the real-time preview provided by the large model offers a more intuitive and convenient interactive experience, allowing the user to control the drawing process more finely, while the analysis capability of the large model can also be used to analyze the data.
The beneficial effects of this application are as follows. By utilizing a large model and multimodal data, information from multiple data types (e.g., images, text, audio) can be synthesized, making the results richer and more diverse and able to present the data fully from different perspectives. Enhanced modality fusion: generating multimodal results by combining the large model with an API endpoint allows the feature information of different modalities to be fused better; the large model has strong feature extraction and representation capabilities and can better capture the associations and shared information between modalities, improving the effect of modality fusion. Accelerated generation: by generating results through the API endpoint, the computing resources and parallel processing capability of a cloud computing platform can be fully utilized, speeding up the generation process so that multimodal results are obtained more quickly, improving generation efficiency and real-time performance.
Drawings
FIG. 1 is a schematic diagram of an example interface;
FIG. 2 is a schematic diagram of user data confirmation;
FIG. 3 is a schematic diagram of picture drawing;
FIG. 4 is a schematic diagram of picture fine-tuning.
Detailed Description
In order to better understand the technical solutions of the present application, the invention is described in further detail below with reference to the accompanying drawings and preferred embodiments.
Step one: a user interface is constructed. An input interface is provided for the user, including a text input interface responsible for accepting text or numeric input, and a selection options interface through which the user can select a specific type of chart, such as a histogram, a line graph, or a pie chart. An example of the interface is shown in FIG. 1.
Step two: user data confirmation. If the user has directly entered data in the text input interface, proceed to step three. If the user has not input specific data, then after the user enters a question, the large model displays the queried data to the user so that the user can decide which data to use for drawing; the user then enters the selected data through the text input interface. This process may repeat until the user selects specific data.
As an example, the user enters in the text input interface: "Plot the concentration change of Beijing PM2.5". The large model returns the Beijing PM2.5 concentration for the last 7 days and the PM2.5 concentration for each of the last 24 hours. The user selects the last 7 days of Beijing PM2.5 concentration as the drawing data and enters "Plot the Beijing PM2.5 concentration for the last 7 days" in the text input interface.
Meanwhile, the sensitivity level of the data is classified according to the specific data input by the user, and this sensitivity level is taken into account when the data is queried. By assigning different sensitivity levels to different types of data, the system can restrict a user's access to particular sensitive data based on these levels, ensuring that sensitive information is not accessed by unauthorized users. Data classification and sensitivity levels are defined as follows. First, the system classifies the data and assigns a corresponding sensitivity level to each category. The classification may be based on the content, type, and privacy properties of the data. For example, personal identity information, financial data, and medical records are generally considered highly sensitive, while published news articles or general statistics may have very low sensitivity.
Second, a system administrator or data manager defines the individual sensitivity levels and ensures that all relevant personnel understand their meaning. Typically, sensitivity levels are divided into multiple tiers, for example: non-sensitive, low-sensitivity, medium-sensitivity, and high-sensitivity. When defining sensitivity levels, legal regulations, industry standards, and the organization's internal policies must be considered to ensure compliance. The system must implement a strict access control policy, limiting users' access to data of different sensitivity levels according to their identities, roles, and needs, and ensuring that a user must log into the system through valid authentication (e.g., user name and password, or multi-factor authentication).
Users are assigned to different roles or permission sets, each role having specific data access permissions. For example, a non-sensitive data role, a low-sensitivity data role, a medium-sensitivity data role, and a high-sensitivity data role may be created. When a user initiates a query request, the system determines whether the user has permission to query data of a particular sensitivity level based on the user's role and the comprehensive sensitivity score. If the user's level meets the level requirement of the data, the query is allowed; otherwise, the query request is denied.
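A minimal sketch of this role-based check (the role names, level ordering, and function names are illustrative assumptions, not specified in the original text):

# Illustrative mapping from role to the highest sensitivity level it may query
ROLE_CLEARANCE = {
    'non_sensitive_role': 0,
    'low_sensitivity_role': 1,
    'medium_sensitivity_role': 2,
    'high_sensitivity_role': 3,
}

LEVEL_ORDER = {'non-sensitive': 0, 'low': 1, 'medium': 2, 'high': 3}

def query_allowed(user_role: str, data_level: str) -> bool:
    # Allow the query only if the role's clearance covers the data's sensitivity level
    return ROLE_CLEARANCE.get(user_role, -1) >= LEVEL_ORDER[data_level]

# Example: a medium-sensitivity role may not query high-sensitivity data
print(query_allowed('medium_sensitivity_role', 'high'))  # False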
Sensitivity calculation:
For a set of data, the sensitivity calculation considers the following three factors:
(1) PII count score (range: 0 to 1; 1 indicates that all data contains PII).
(2) Field sensitivity score (range: 0 to 1; 1 indicates that all fields are highly sensitive).
(3) Data access score (range: 0 to 1; 1 indicates that the data is frequently and widely accessed).
The comprehensive sensitivity score is calculated using a weighted average:
comprehensive sensitivity score = (PII count score × weight 1) + (field sensitivity score × weight 2) + (data access score × weight 3)
PII (Personally Identifiable Information) refers to information that can be used to uniquely identify, contact, or locate a person.
In this formula, the weights (weight 1, weight 2, weight 3) are defined according to the needs and policies of the organization and determine the relative importance of the different indicators. For example, if the amount of PII is critical to the organization, weight 1 may be set relatively high.
The score range is divided into four classes: non-sensitive (0-0.3), low-sensitivity (0.31-0.6), medium-sensitivity (0.61-0.8), and high-sensitivity (0.81-1).
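As a minimal sketch of this scoring step (the function names are illustrative, and the example weights are assumptions to be set by organizational policy):

def comprehensive_sensitivity_score(pii_score, field_score, access_score,
                                    w1=0.5, w2=0.3, w3=0.2):
    # Weighted average per the formula above; the weights should sum to 1
    return pii_score * w1 + field_score * w2 + access_score * w3

def sensitivity_level(score):
    # Map the score onto the four classes defined above
    if score <= 0.3:
        return 'non-sensitive'
    if score <= 0.6:
        return 'low'
    if score <= 0.8:
        return 'medium'
    return 'high'

# Example: data that is mostly PII, with sensitive fields, rarely accessed
score = comprehensive_sensitivity_score(0.9, 0.7, 0.1)  # 0.45 + 0.21 + 0.02 = 0.68
print(sensitivity_level(score))  # 'medium'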
Step three: create the drawing option API endpoint. The drawing option API endpoint is designed according to the provided selection options. Parameters used to draw the picture, such as colors, object positions, the picture title, and coordinate-axis titles, are included in the drawing option API endpoint and used for subsequent fine-tuning of the picture.
Selecting a back-end framework: first, a back-end framework is selected to build the API. Common choices include Django and Flask (Python), Express.js (Node.js), Ruby on Rails (Ruby), Spring Boot (Java), ASP.NET (C#), and so on.
Defining routes: one or more routes are defined to process image generation requests, such as @app.route('/generate_chart', methods=['POST']). These routes correspond to different endpoints, each serving a different type of image generation request. In the handler function, the request parameters sent from the front end are parsed; these parameters include information such as color, position, and title (e.g., obtained via the keys 'color' and 'position'), from which the image is custom-generated. Based on the parsed parameters, the image is generated using the selected chart library (e.g., Matplotlib, Plotly, D3.js). Different chart libraries have different generation methods and APIs, so the image is generated according to the selection.
An example route definition in Flask (corrected and commented; the JSON keys and the saved file name follow the parameter examples above):

from flask import Flask, request, jsonify
import matplotlib
matplotlib.use('Agg')  # headless backend, so the server needs no display
import matplotlib.pyplot as plt

app = Flask(__name__)

@app.route('/generate_chart', methods=['POST'])
def generate_chart():
    # Handle an image generation request: parse the parameters sent by the front end
    data = request.json
    x = data['x']
    y = data['y']
    color = data['color']
    position = data.get('position')  # e.g., legend position; unused in this minimal example
    # ... other parameters are parsed in the same way

    # Generate the image using the selected chart library (Matplotlib here)
    plt.plot(x, y, color=color)
    plt.title(data['title'])
    plt.xlabel(data['x_axis_title'])
    plt.ylabel(data['y_axis_title'])

    # Save the image to the server
    plt.savefig('chart.png')
    plt.close()

    # Return the image URL (or image data) as the response
    return jsonify({'image_url': 'http://example.com/chart.png'})

if __name__ == '__main__':
    app.run()
Saving the image: once the image is generated, it is saved to a designated directory on the server so that it can subsequently be accessed via a URL, such as 'http://example.com/chart.png'.
The URL of the generated image, or the image data, is returned to the front end as the response. If the image is stored on the server, the image URL is returned; if the image data is to be embedded in the response, it can be encoded in Base64 format and returned in the JSON response. In the front-end application, the back-end API endpoint is invoked by initiating a POST request; the front end receives the image URL or image data in the response and then displays the image on the user interface.
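A minimal client-side sketch of this call (shown in Python for consistency with the Flask example above; the host, port, and JSON keys are assumptions matching that route, and the dates are inferred from the peak and valley dates given in step six):

import requests

payload = {
    'x': ['2023-08-20', '2023-08-21', '2023-08-22', '2023-08-23',
          '2023-08-24', '2023-08-25', '2023-08-26'],
    'y': [45, 34, 32, 56, 67, 47, 48],
    'color': 'blue',
    'position': 'upper right',
    'title': 'Beijing PM2.5 Concentration over the Last 7 Days',
    'x_axis_title': 'Time',
    'y_axis_title': 'Concentration',
}

# Invoke the back-end endpoint with a POST request
resp = requests.post('http://localhost:5000/generate_chart', json=payload)

# The response carries the image URL (or Base64 image data) to display
print(resp.json()['image_url'])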
Step four: picture drawing. The drawing option API is called through the large model to draw the picture.
The user selects the last 7 days of Beijing PM2.5 concentration and chooses a line graph in the selection options interface; the large model calls the line-graph drawing option API to draw the picture.
Step five: picture fine-tuning. The user inputs the parameters to adjust, such as colors and object positions, to the large model through the text input interface; the large model transmits these parameters to the drawing option API endpoint, redraws the picture, and presents it to the user. The user may make further adjustments based on the real-time preview until satisfied.
User input: the line color is red, the chart title is "Beijing PM2.5 Concentration Change over the Last 7 Days", the abscissa title is "Time", and the ordinate title is "Concentration". The large model calls the drawing option API again to redraw the picture.
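Continuing the client sketch above, the adjustment request that the large model would forward to the endpoint in this example might look as follows (a sketch; only the JSON keys introduced in the Flask example are assumed):

# Fine-tuning parameters extracted from the user's adjustment text
adjustment = {
    'color': 'red',
    'title': 'Beijing PM2.5 Concentration Change over the Last 7 Days',
    'x_axis_title': 'Time',
    'y_axis_title': 'Concentration',
}
payload.update(adjustment)  # reuse the data from the first request

# Redraw with the adjusted parameters and present the new preview
resp = requests.post('http://localhost:5000/generate_chart', json=payload)
print(resp.json()['image_url'])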
Step six: data analysis. The large model can logically reason over the data input by the user, combine it with its knowledge, and output functional results together with the generated picture.
The large model analyzes the 7-day data trend:
It can be seen that the PM2.5 concentration fluctuated somewhat over the last 7 days, but shows no significant continuous rising or falling trend. The data varies between 32 and 67.
Average concentration: the mean PM2.5 concentration over the last 7 days is (45+34+32+56+67+47+48)/7 = 329/7 ≈ 47.
Peak and valley: over these 7 days, the peak PM2.5 concentration is 67, occurring on 2023-08-24, and the lowest value is 32, occurring on 2023-08-22.
Volatility: the amplitude of variation is relatively small; the difference between the maximum and minimum values is 67 − 32 = 35, indicating that the fluctuation in PM2.5 concentration over this period is relatively modest.
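These statistics can be checked directly from the example data (an illustrative sanity check, not part of the original text):

values = [45, 34, 32, 56, 67, 47, 48]
print(sum(values) / len(values))   # 47.0 (mean concentration)
print(max(values), min(values))    # 67 32 (peak and valley)
print(max(values) - min(values))   # 35 (range of variation)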

Claims (6)

1. A method for outputting multi-modal results by a large model based on an API endpoint, comprising:
acquiring an input text, classifying the sensitivity level of the input text, and creating a drawing option API endpoint according to the sensitivity level;
calling the drawing option API through the large model to draw a picture;
acquiring an adjustment text, the large model transmitting the adjustment text to the drawing option API endpoint and drawing the picture again; and
the large model logically reasoning over the input text and the adjustment text and outputting a functional result in combination with the generated picture.
2. The method for outputting multi-modal results by a large model based on an API endpoint as claimed in claim 1, wherein creating the drawing option API endpoint comprises:
1) selecting a back-end framework and building the API, the back-end framework being any one of Django, Flask, Express.js, Ruby on Rails, Spring Boot, and ASP.NET;
2) defining one or more routes to process image generation requests, each route corresponding to a different endpoint and each endpoint serving a different type of image generation request; and
3) parsing the request parameters sent from the front end and custom-generating the image according to the parameters.
3. The method for outputting multi-modal results by a large model based on an API endpoint as claimed in claim 1, wherein the large model transmitting the adjustment text to the drawing option API endpoint comprises:
1) parsing the request parameters sent from the front end, the parameters including color, position, and title;
2) custom-generating images according to the parameters; and
3) selecting a drawing library according to the parsed parameters, generating the image with the selected drawing library, and saving the image.
4. The method for outputting multi-modal results by a large model based on an API endpoint as claimed in claim 2, further comprising returning the URL of the custom-generated image, or the image data, to the front end as a response; if the image is stored on the server, returning the URL of the image; and if the image data needs to be embedded in the response, encoding the image data in Base64 format and returning it in a JSON response;
wherein, in front-end use, the back-end API endpoint is invoked by initiating a POST request, and the front end receives the image URL or image data in the response and then displays the image on the user interface.
5. The method for outputting multi-modal results by a large model based on an API endpoint as claimed in claim 1, wherein classifying the sensitivity level of the input text comprises:
calculating a comprehensive sensitivity score using a weighted average:
comprehensive sensitivity score = (PII count score × weight 1) + (field sensitivity score × weight 2) + (data access score × weight 3);
wherein the PII count score ranges from 0 to 1, with 1 meaning that all data contains PII, PII denoting information that can be used to uniquely identify, contact, or locate a person; the field sensitivity score ranges from 0 to 1, with 1 meaning that all fields are highly sensitive; the data access score ranges from 0 to 1, with 1 meaning that the data is frequently and widely accessed; weight 1, weight 2, and weight 3 are defined according to the needs and policies of the organization and determine the importance of the different indicators; and the score range is divided into four classes: non-sensitive (0-0.3), low-sensitivity (0.31-0.6), medium-sensitivity (0.61-0.8), and high-sensitivity (0.81-1).
6. A system for outputting multi-modal results by a large model based on an API endpoint, comprising: a memory and a processor; the memory having stored thereon a computer program which, when executed by the processor, implements the method for outputting multi-modal results by a large model based on an API endpoint of any one of claims 1 to 5.
CN202311717511.0A 2023-12-14 2023-12-14 Method and system for outputting multi-mode result by large model based on API (application program interface) endpoint Pending CN117708347A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311717511.0A CN117708347A (en) 2023-12-14 2023-12-14 Method and system for outputting multi-mode result by large model based on API (application program interface) endpoint

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311717511.0A CN117708347A (en) 2023-12-14 2023-12-14 Method and system for outputting multi-mode result by large model based on API (application program interface) endpoint

Publications (1)

Publication Number Publication Date
CN117708347A true CN117708347A (en) 2024-03-15

Family

ID=90149247

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311717511.0A Pending CN117708347A (en) 2023-12-14 2023-12-14 Method and system for outputting multi-mode result by large model based on API (application program interface) endpoint

Country Status (1)

Country Link
CN (1) CN117708347A (en)


Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111063006A (en) * 2019-12-16 2020-04-24 北京亿评网络科技有限公司 Image-based literary work generation method, device, equipment and storage medium
CN113742460A (en) * 2020-05-28 2021-12-03 华为技术有限公司 Method and device for generating virtual role
US20220114463A1 (en) * 2020-10-14 2022-04-14 Openstream Inc. System and Method for Multi-modality Soft-agent for Query Population and Information Mining
US20220164548A1 (en) * 2020-11-24 2022-05-26 Openstream Inc. System and Method for Temporal Attention Behavioral Analysis of Multi-Modal Conversations in a Question and Answer System
CN113762237A (en) * 2021-04-26 2021-12-07 腾讯科技(深圳)有限公司 Text image processing method, device and equipment and storage medium
US20230306131A1 (en) * 2022-02-15 2023-09-28 Qohash Inc. Systems and methods for tracking propagation of sensitive data
CN116186312A (en) * 2022-12-29 2023-05-30 北京霍因科技有限公司 Multi-mode data enhancement method for data sensitive information discovery model
CN116932708A (en) * 2023-04-18 2023-10-24 清华大学 Open domain natural language reasoning question-answering system and method driven by large language model
CN116881462A (en) * 2023-07-31 2023-10-13 阿里巴巴(中国)有限公司 Text data processing, text representation and text clustering method and equipment
CN116992010A (en) * 2023-08-02 2023-11-03 无知(北京)智慧科技有限公司 Content distribution and interaction method and system based on multi-mode large model
CN117057318A (en) * 2023-08-17 2023-11-14 亚信科技(中国)有限公司 Domain model generation method, device, equipment and storage medium
CN116994069A (en) * 2023-09-22 2023-11-03 武汉纺织大学 Image analysis method and system based on multi-mode information
CN117114112A (en) * 2023-10-16 2023-11-24 北京英视睿达科技股份有限公司 Vertical field data integration method, device, equipment and medium based on large model

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ANDING ZHANG: "AI Decide: Text-to-Image Generation Transformer", ResearchGate, 29 February 2020 (2020-02-29), pages 1-8 *
ZHANG Zhongwei; CAO Lei; CHEN Xiliang; KOU Dalei; SONG Tianting: "A Survey of Research on Knowledge Reasoning Based on Neural Networks", Computer Engineering and Applications, no. 12, 25 March 2019 (2019-03-25), pages 1-5 *
XIE Bo: "The Formation Mechanism of ChatGPT Cybersecurity Risks and Its Response Paths", National Security Forum, 31 May 2023 (2023-05-31), pages 17-33 *

Similar Documents

Publication Publication Date Title
US11811805B1 (en) Detecting fraud by correlating user behavior biometrics with other data sources
US10372772B2 (en) Prioritizing media based on social data and user behavior
KR102151328B1 (en) Order clustering and method and device to combat malicious information
US8904493B1 (en) Image-based challenge-response testing
US11870741B2 (en) Systems and methods for a metadata driven integration of chatbot systems into back-end application services
CN102365645A (en) Organizing digital images by correlating faces
CN109255037B (en) Method and apparatus for outputting information
CN110263214A (en) Generation method, device, server and the storage medium of video title
EP3852007B1 (en) Method, apparatus, electronic device, readable storage medium and program for classifying video
US11601391B2 (en) Automated image processing and insight presentation
US11315010B2 (en) Neural networks for detecting fraud based on user behavior biometrics
US11886556B2 (en) Systems and methods for providing user validation
US11250039B1 (en) Extreme multi-label classification
CN110505513A (en) A kind of video interception method, apparatus, electronic equipment and storage medium
CN113544682A (en) Data privacy using a Podium mechanism
CN113157956B (en) Picture searching method, system, mobile terminal and storage medium
US11876634B2 (en) Group contact lists generation
CN117708347A (en) Method and system for outputting multi-mode result by large model based on API (application program interface) endpoint
JP7425126B2 (en) Mute content across platforms
CN115510032A (en) Database behavior analysis method and system based on machine learning
JP2004192555A (en) Information management method, device and program
CN110209880A (en) Video content retrieval method, Video content retrieval device and storage medium
KR102662884B1 (en) User interaction methods, devices, devices and media
WO2022178238A1 (en) Live updates in a networked remote collaboration session
CN110443202B (en) System, method and storage medium for real-time analysis of paper font regularity

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination