WO2019233365A1 - Data processing method, apparatus, electronic device, and readable medium


Info

Publication number
WO2019233365A1
Authority
WO
WIPO (PCT)
Prior art keywords
image data
information
program
sharing
classification
Prior art date
Application number
PCT/CN2019/089772
Other languages
English (en)
French (fr)
Inventor
王政华
Original Assignee
阿里巴巴集团控股有限公司 (Alibaba Group Holding Limited)
Priority date
Filing date
Publication date
Application filed by 阿里巴巴集团控股有限公司 (Alibaba Group Holding Limited)
Publication of WO2019233365A1
Priority to US17/108,996 (published as US20210150243A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/0482 Interaction with lists of selectable items, e.g. menus
    • G06F3/04842 Selection of displayed objects or displayed text elements
    • G06F3/04845 GUI interaction techniques for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F9/445 Program loading or initiating
    • G06F16/55 Information retrieval of still image data; Clustering; Classification
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus false rejection rate
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G06N3/045 Neural networks; Combinations of networks
    • G06N3/08 Neural networks; Learning methods
    • G06N5/04 Inference or reasoning models
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/235 Image preprocessing by selection of a specific region based on user input or interaction
    • G06V10/454 Local feature extraction; Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06V10/82 Image or video recognition or understanding using neural networks
    • G06V20/30 Scene-specific elements in albums, collections or shared content, e.g. social network photos or video
    • H ELECTRICITY; H04L TRANSMISSION OF DIGITAL INFORMATION
    • H04L51/046 User-to-user messaging: interoperability with other network applications or services
    • H04L51/10 User-to-user messaging: multimedia information
    • H04L51/216 User-to-user messaging: handling conversation history, e.g. grouping of messages in sessions or threads

Definitions

  • the present application relates to the field of computer technology, and in particular, to a data processing method, a data processing device, an electronic device, and a machine-readable medium.
  • With the development of terminal technology, more and more users use terminal devices to perform the operations they need, such as querying information through a browser, sharing and interacting with information through social software, and communicating through instant-messaging software.
  • While browsing, users sometimes encounter information they are interested in, such as pictures. They can store it on the terminal device and then open the corresponding software program to share it, for example sending it to friends in a communication program or searching for the corresponding product in a shopping program.
  • the embodiments of the present application provide a data processing method to improve information processing efficiency.
  • the embodiments of the present application further provide a data processing device, an electronic device, and a machine-readable medium to ensure the implementation and application of the foregoing method.
  • First, an embodiment of the present application discloses a data processing method, including: acquiring image data; identifying classification information of the image data and determining corresponding sharing operation information according to the classification information; and calling a corresponding program according to the sharing operation information and using the program to publish the image data.
  • An embodiment of the present application further discloses a data processing apparatus, including: an acquisition module for acquiring image data; an identification module for identifying classification information of the image data and determining corresponding sharing operation information according to the classification information; and a sharing module for calling a corresponding program according to the sharing operation information and using the program to publish the image data.
  • An embodiment of the present application further discloses an electronic device, including: a processor; and a memory storing executable code which, when executed, causes the processor to execute the data processing method described in the embodiments of the present application.
  • An embodiment of the present application further discloses one or more machine-readable media having executable code stored thereon which, when executed, causes a processor to execute the data processing method described in one or more embodiments of the present application.
  • An embodiment of the present application also discloses an operating system for an electronic device, including: a processing unit that acquires image data, identifies classification information of the image data, and determines corresponding sharing operation information according to the classification information; and a sharing unit that calls a corresponding program according to the sharing operation information and uses the program to publish the image data.
  • the embodiments of the present application include the following advantages:
  • In the embodiments of the present application, the image data to be shared can be obtained; classification information of the image data is then determined, such as the classification of the content in the image; the corresponding sharing operation information is determined based on that classification information; and a corresponding program can then be called according to the sharing operation information to publish the image data.
  • FIG. 1 is a schematic diagram of an image sharing process in an embodiment of the present application
  • FIG. 2 is a schematic interface diagram of a sharing process according to an embodiment of the present application.
  • FIG. 3 is a schematic interface diagram of another sharing process according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of an image capture example in the embodiment of the present application.
  • FIG. 5 is a schematic processing diagram of a classifier in an embodiment of the present application.
  • FIG. 6 is a schematic processing diagram of a data analyzer according to an embodiment of the present application.
  • FIG. 7 is a flowchart of steps in an embodiment of a data processing method of the present application.
  • FIG. 9 is a schematic structural diagram of a system module in an embodiment of the present application.
  • FIG. 10 is a structural block diagram of an embodiment of a data processing apparatus of the present application.
  • FIG. 11 is a structural block diagram of another embodiment of a data processing apparatus of the present application.
  • FIG. 12 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
  • FIG. 13 is a schematic diagram of a hardware structure of an electronic device according to another embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of an operating system according to an embodiment of the present application.
  • An embodiment of the present application proposes a data processing method that can automatically identify corresponding classification information of the acquired image data.
  • the classification information can be identified based on the content contained in the image data, and then the corresponding sharing operation information is determined based on the classification information.
  • the image data is released according to the sharing operation information, so that the sharing operation of the image data can be automatically determined and the sharing can be performed, thereby improving the information processing efficiency.
  • Here, sharing refers to making the image data available for joint use: the image data can be used by other applications and can also be displayed for other users to view, such as searching for the product shown in an image in a shopping program, or sending the image to friends in a communication program.
  • FIG. 1 is a schematic diagram of an image sharing process in an embodiment of the present application.
  • Image data may be acquired in step 102.
  • Image data can be obtained from multiple sources: for example, it can be downloaded from the network, obtained locally, obtained by taking pictures, downloaded from a program, or obtained by capturing screen images.
  • the step of capturing the image data obtained by the interception method includes: intercepting the screen image according to the instruction information to generate corresponding image data.
  • If the user is interested in content displayed by the terminal device, the user can issue instructions through various operations, and the screen image is then captured according to the instruction information to obtain corresponding image data, such as capturing the entire screen image or an image of a partial screen area.
  • the indication information is triggered according to at least one of the following operations: a tap operation, a gesture operation, and a slide operation.
  • a set gesture can also be performed on the terminal device to generate a gesture operation, such as shaking the device or making a gesture on the screen.
  • Capturing a screen image according to the instruction information includes: determining a capture area according to the instruction information, and capturing the screen image corresponding to that area.
  • The capture area can be determined according to the instruction information: for example, the coordinate information of the area is derived from the instruction information, for instance by obtaining the centre point of the area, and the screen image is then captured within that area. As shown in the screen diagram on the left of FIG. 2, the tapped position can be obtained from the instruction information, used as the centre of a circular area with a set radius, and the image of the circular area can then be captured.
  • As another example, instruction information can be sent by sliding on the screen, and the corresponding area determined from the sliding coordinates. The slid-over area may be irregular; it can be adjusted to a corresponding circle, square, triangle, or other regular region before the image data is captured.
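The tap-based case above can be sketched in a few lines. This is a hypothetical illustration, not code from the patent: the tapped point becomes the centre of a circular capture region with a preset radius, and its bounding box is clipped to the screen.

```python
# Hypothetical sketch: derive a capture area from a tap position.
# The tapped point is the centre of a circular region with a set radius;
# the region's bounding box is clipped to the screen bounds.

def capture_area_from_tap(x, y, radius, screen_w, screen_h):
    """Return the clipped bounding box (left, top, width, height) of a
    circular capture region centred on the tapped point (x, y)."""
    left = max(0, x - radius)
    top = max(0, y - radius)
    right = min(screen_w, x + radius)
    bottom = min(screen_h, y + radius)
    return left, top, right - left, bottom - top

print(capture_area_from_tap(100, 50, 80, 1080, 1920))
# -> (20, 0, 160, 130): the box is clipped at the top edge of the screen
```

A slide-based area could be handled the same way by taking the bounding box of the slide coordinates before regularising it.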
  • the system kernel includes: an input device and an image processing device, where the input device is used to detect an input and the image processing device is used to perform image-related processing.
  • the image processing device is a GPU (Graphics Processing Unit).
  • The operating system also includes an input processing module, a window manager, and an image synthesizer, where the input processing module is used to process input events, the window manager is used to manage interface windows, such as locating the window in which an image capture is requested, and the image synthesizer is used to synthesize images.
  • an input event can be detected by the input device, and then the input event is transmitted to the input processing module.
  • the input processing module is used to recognize the gesture according to the input event, and input the coordinates (topx, topy, width, height) of the area delineated by the gesture to the window manager.
  • The window manager may determine and output window information according to the gesture, the area coordinates, and similar information; for example, it can determine the window delineated by the gesture, such as the window of the currently running application interface, and then pass the layer identifier corresponding to that window and the coordinates of the layer's capture area to the image synthesizer as window information.
  • The image synthesizer transmits the window information to the image processing device, which captures the image and returns it to the image synthesizer to synthesize the image data.
  • the image of the specified area is read from the GPU and fed back to the image synthesizer to generate image data in the corresponding format.
  • the image data can also be returned to the window manager for processing such as display.
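The capture pipeline described above — input event, input processing, window manager, image synthesizer, image processing device — can be sketched as a dataflow. All class and field names below are illustrative stand-ins, not identifiers from the patent.

```python
# Hypothetical sketch of the capture pipeline:
# input event -> input processing (gesture + area) -> window manager
# (window info) -> image synthesizer -> image processing device (GPU stand-in).

class InputProcessing:
    def recognize(self, event):
        # Recognize the gesture and the area it delineates: (topx, topy, width, height).
        return {"gesture": event["gesture"], "area": event["area"]}

class WindowManager:
    def window_info(self, gesture_info):
        # Locate the window the gesture falls in and attach its layer identifier.
        return {"layer": "app_window_layer", "area": gesture_info["area"]}

class FakeGPU:
    def read_region(self, layer, area):
        # Stand-in for reading the specified region's pixels from the GPU.
        _, _, w, h = area
        return [[0] * w for _ in range(h)]  # placeholder pixel rows

class ImageSynthesizer:
    def __init__(self, gpu):
        self.gpu = gpu
    def capture(self, window_info):
        # Ask the image processing device for the region's pixels, then
        # package them as image data in the required format.
        pixels = self.gpu.read_region(window_info["layer"], window_info["area"])
        return {"format": "RGBA", "pixels": pixels}

event = {"gesture": "circle", "area": (10, 20, 4, 3)}
info = InputProcessing().recognize(event)
image = ImageSynthesizer(FakeGPU()).capture(WindowManager().window_info(info))
print(image["format"], len(image["pixels"]), len(image["pixels"][0]))
```

The synthesized image data would then be handed back to the window manager for display, as the text notes.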
  • In step 104, classification information of the image data is identified: the content in the image can be recognized, and the classification information is then determined according to that content.
  • a classifier is trained, and the corresponding category of image data is identified based on the classifier, so that the classifier can be used to identify classification information corresponding to the content contained in the image data.
  • the classifier can also be called a classification model, a data set used for classification, and the like.
  • the classifier is used to identify the category of the content contained in the image.
  • the classifier can be obtained based on the training of the data model.
  • An image may be input into a classifier, and the classifier may output classification information of the image, where the classification information may have one or more categories, and the category is a category to which the content contained in the image data belongs.
  • the image data corresponding to the circular area intercepted by the screen can be identified as clothes, tops, T-shirts and other categories.
  • a classifier is trained based on an image database and a Convolutional Neural Network (CNN) model.
  • The image database can store image data obtained from terminal devices, the network, and the like, together with classification information of the content contained in that image data; the convolutional neural network model is trained on this data to obtain a classifier that can recognize the classification information of the content contained in an image.
  • Using the classifier to identify the classification information corresponding to the content contained in the image data includes: using the classifier to classify the image data to determine a classification result vector of the content contained in the image data, and using that classification result vector as the classification information.
  • the image data can be input to a classifier, and the classifier performs classification processing on the image, and then outputs a classification result vector of the content contained in the image data.
  • One or more classification result vectors can be output, and the classification result vector is used as classification information.
  • FIG. 5 shows a processing diagram of a classifier.
  • Four channels R, G, B, and A can be extracted from the image data as input to the classifier.
  • The R, G, and B channels are the red, green, and blue colour-space channels, and A is the alpha channel, i.e., the transparency/opacity parameter.
  • The data is passed through one or more convolutional layers and then into a fully connected layer, which computes data such as the probability of each classification; a softmax layer then converts these probabilities into a classification result vector.
  • the corresponding classification result vector can be generated based on the probabilities of different classification results, or the probability of each classification result can be integrated into one classification result vector.
  • the classification category includes clothing, food, scenery, text, etc., and a corresponding classification result vector can be generated based on these categories and the probability of each category to which the image data belongs.
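The forward pass described above — 4-channel (R, G, B, A) input, convolution, fully connected layer, softmax — can be mimicked in a toy numpy sketch. The shapes, random weights, and the four example categories (clothing, food, scenery, text) are illustrative only; a real classifier would be trained as the text describes.

```python
import numpy as np

# Toy, hypothetical forward pass: 4-channel image -> convolution -> ReLU ->
# fully connected layer -> softmax classification result vector.

rng = np.random.default_rng(0)

def conv2d(image, kernel):
    """Valid 2D convolution summed over the channel axis.
    image: (C, H, W), kernel: (C, kH, kW) -> (H-kH+1, W-kW+1)."""
    c, h, w = image.shape
    _, kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[:, i:i + kh, j:j + kw] * kernel)
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

image = rng.random((4, 8, 8))        # R, G, B, A channels
kernel = rng.random((4, 3, 3))       # one convolutional filter
features = np.maximum(conv2d(image, kernel), 0).ravel()  # ReLU + flatten
fc_weights = rng.random((4, features.size))  # 4 example categories:
probs = softmax(fc_weights @ features)       # clothing, food, scenery, text

print(probs.shape)  # one probability per category; probabilities sum to 1
```

The resulting vector is exactly the kind of "classification result vector" the text describes: one probability per category.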
  • classification information of image data is obtained through processing by a classifier.
  • the classification information may be a first-level classification or an N-level classification.
  • N is a positive integer greater than 1, which may be determined according to actual requirements.
  • For example, an image of a circular area is captured, and the corresponding recognition result may be clothing at level 1, tops at level 2, or T-shirts at level 3.
  • The level-2 to level-N classifications can be obtained through multiple passes of network-model components such as the convolutional layer, the fully connected layer, and the softmax layer, yielding classification information at level 2 through level N.
  • The classification result vector is obtained through softmax processing, giving a level-1 classification (such as the probabilities of clothing, landscape, person, text, etc.) and N-level classifications (such as the probabilities of tops and their types, pants and their types, socks and their types, and so on).
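Reading a hierarchical result off such probability vectors amounts to picking the most probable category at each level. The category names and probabilities below are made-up examples, not data from the patent.

```python
# Hypothetical sketch: pick the level-1 category, then the best
# level-2 category within it, from per-level probability tables.

level1 = {"clothing": 0.7, "landscape": 0.1, "person": 0.1, "text": 0.1}
level2 = {"clothing": {"tops": 0.6, "pants": 0.3, "socks": 0.1}}

top1 = max(level1, key=level1.get)               # most probable level-1 category
top2 = max(level2[top1], key=level2[top1].get)   # most probable level-2 category within it
print(top1, top2)
# -> clothing tops
```

Deeper levels (e.g. T-shirts under tops) would repeat the same selection one level down.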
  • the classifier obtained through training can quickly determine the classification information of the image data, and the image data and its classification information can also be used as training data after use, which is convenient for subsequent optimization of the classifier.
  • In step 106, the corresponding sharing operation information may be determined according to the classification information; that is, the sharing information for an image of the corresponding category is determined. The sharing operation information is the information needed to publish the image data, such as the software used to share the image data and the operations to be performed.
  • the classification information may be analyzed according to a data analyzer to determine sharing operation information of the image data.
  • the data analyzer can be obtained based on the user's usage habit information and the like, so that the classification information is input to the data analyzer for processing, and then the sharing operation information of the image data can be output.
  • the data analyzer may also be called a data analyzer model, a data set used for analysis, and the like.
  • the data analyzer is used to determine image sharing operation information and may be obtained based on the data model training.
  • Analyzing the classification information with a data analyzer to determine the sharing operation information of the image data includes: obtaining usage-habit information and converting it into a usage-habit vector; and inputting the usage-habit vector and the classification result vector into the data analyzer for analysis to determine the sharing operation information of the image data.
  • The sharing of images can also be determined based on user habits, so the user's usage-habit information can be collected in advance, such as the programs users share to when acquiring different kinds of images and the operations users perform in different programs, for example searching for clothes in shopping programs, sharing selfies in instant-messaging programs, querying travel-destination information in travel programs, and so on.
  • the usage habit information can also be converted into a usage habit vector.
  • In the usage-habit vector, each program is associated with its sharing information, and a vector is generated for that sharing information according to category: the position of the matching category is set to 1 and the other categories to 0.
  • The category vectors corresponding to the sharing information of each program together form the usage-habit vector; the usage-habit vector and the classification result vector are input into the data analyzer, and through its analysis the sharing operation information of the image data can be output.
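The one-hot construction just described can be sketched directly. The category list and program names here are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch: build the usage-habit vector by concatenating, per
# program, a one-hot category vector (1 for the habitual category, 0 elsewhere).

CATEGORIES = ["clothing", "food", "scenery", "text"]

def one_hot(category):
    return [1 if c == category else 0 for c in CATEGORIES]

# e.g. the user habitually shares food images in a messaging program and
# searches for clothing in a shopping program
habits = {"shopping_app": "clothing", "messaging_app": "food"}
habit_vector = [v for prog in sorted(habits) for v in one_hot(habits[prog])]
print(habit_vector)
# -> [0, 1, 0, 0, 1, 0, 0, 0]
```

Sorting the program names just fixes a stable ordering so the vector layout is reproducible.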
  • The data analyzer can be obtained by training various analysis models, for example training it with a multilayer perceptron (MLP) model.
  • The sharing operation information includes program information and operation information, where the program information is information about the program that shares the image data, such as a program identifier or program name, and the operation information is information about the sharing operation performed on the image data, such as a search, chat, or posting operation.
  • The operation information includes a sharing type and sharing content, where the sharing type is the type of page shared to in the program, such as a search page, an information-publishing page, or a chat page, and the sharing content is the content corresponding to the image data, such as an image identifier or an image storage address.
  • the data analyzer is trained based on the MLP model.
  • the usage habit vector and the classification result vector are input into the data analyzer, which processes them and outputs the corresponding sharing operation information.
  • the format of the sharing operation information is {program, sharing type, sharing content}; from the output, a piece of sharing operation information can be obtained, such as starting the program {…}, sharing type {search}, sharing content {short skirt image}; starting the program {…}, sharing type {positioning}, sharing content {…}; starting the program {WeChat}, sharing type {send to circle of friends}, sharing content {roast duck image}, and so on.
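The {program, sharing type, sharing content} format can be modeled as a simple record. This is a minimal sketch for illustration; the field names and example values are assumptions, not the patent's data layout.

```python
# Sketch (illustrative): sharing operation information as a
# {program, sharing type, sharing content} record.
from dataclasses import dataclass

@dataclass
class SharingOperation:
    program: str        # program information, e.g. a program name or identifier
    sharing_type: str   # page type in the program: search, post, chat, ...
    content: str        # content for the image data, e.g. an image identifier

# The analyzer's output can yield several candidate operations for the user:
ops = [
    SharingOperation("shopping program", "search", "short skirt image"),
    SharingOperation("WeChat", "send to circle of friends", "roast duck image"),
]
```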
  • the classifier may determine the classification information of the image, and the data analyzer may analyze the sharing operation information of the image data.
  • the above-mentioned classifier and data analyzer may be trained separately, combined into one data processor, split into other processors, or replaced by other data processors, data processing collections, processing models, and the like.
  • a mathematical model is a scientific or engineering model constructed using mathematical logic methods and mathematical language.
  • a mathematical model expresses, in a general or approximate way, the characteristics or quantitative dependencies of a certain system of things as a mathematical structure; such a structure is a purely relational structure of some system drawn with the help of mathematical symbols.
  • the mathematical model can be one or a set of algebraic equations, differential equations, difference equations, integral equations, or statistical equations, or a combination thereof, which quantitatively or qualitatively describe the interrelationships or causality between the variables of the system.
  • besides mathematical models described by equations, there are models described by other mathematical tools, such as algebra, geometry, topology, and mathematical logic; mathematical models describe the behavior and characteristics of a system rather than its actual structure.
  • the above usage habit information in the embodiment of the present application can be uploaded to the server, so that the server can train the data analyzer based on each user's usage habit information; the sharing operation information can also be added to the usage habit information, thereby updating the training set of the data analyzer and improving its accuracy through training.
  • a corresponding program may be called according to the sharing operation information in step 108, and the image data is released using the program.
  • a program to be called may be determined according to the sharing operation information, and then the image data is published in the program, such as searching for products in the image data, sharing the image data in a circle of friends, and sending the image data to a friend.
  • the sharing operation information may include one or more pieces of program information and corresponding operation information; that is, it may recommend one or more programs for the user to choose from, so a selection instruction from the user may also be received, the program selected according to that instruction, and the image data then published in that program.
  • the calling a corresponding program according to the sharing operation information and using the program to publish the image data includes: calling the corresponding program according to the program information, loading the image data in the program according to the operation information, and publishing the image data according to a publishing instruction.
  • the user's publishing instructions, such as instructions to send, to query, or to add editing information, can then be followed to publish the image data and complete the sharing of the image data.
  • loading the image data in the program according to the operation information includes: starting a corresponding page in the program according to the sharing type; loading the image data in the page according to the sharing content .
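The two-step loading described above — start a page matching the sharing type, then load the content into it — can be sketched as a small dispatcher. The page names and the fallback behavior here are assumptions for illustration only.

```python
# Sketch (assumed page names): start the page that matches the sharing type,
# then load the sharing content (the image data) into that page.
PAGES = {
    "search": "SearchPage",        # e.g. a shopping program's search page
    "post": "PublishPage",         # e.g. an information publishing page
    "chat": "ChatPage",            # e.g. an instant messaging chat page
}

def load_for_sharing(sharing_type, sharing_content):
    """Pick the page by sharing type and attach the content to it."""
    page = PAGES.get(sharing_type, "HomePage")   # assumed default fallback
    return {"page": page, "loaded": sharing_content}

result = load_for_sharing("search", "t-shirt image")
```

The user's publishing instruction would then trigger the actual release from the loaded page.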
  • the user is interested in a piece of clothing during the process of using the terminal device, and may issue an instruction message to delimit a capture area, and then capture image data in the capture area.
  • this image data is classified to determine the classification information as clothes, and the analysis operation then yields the sharing operation information {instant messaging program, send to circle of friends, clothes image}, so the circle-of-friends page of the instant messaging program can be started and the clothes image data loaded on that page; the user can then edit the corresponding information in this page, as shown in the terminal interface on the right side of Figure 2, and click the publish control to perform the publish operation and share the clothes image with other users.
  • the user is interested in a piece of clothing during the process of using the terminal device, and may issue an instruction message to delimit a capture area, and then capture image data in the capture area.
  • this image data is classified to determine the classification information as T-shirt, and the sharing operation information obtained through analysis includes: {instant messaging program, send to circle of friends, T-shirt image}, {shopping program, search, T-shirt image}, {instant messaging program, send to friend, T-shirt image}, and so on; if the user chooses to share in the shopping program, the shopping program can be called and the corresponding search page started, in which the image data can be loaded to search for the T-shirt and obtain the corresponding search results, as shown in the terminal interface on the right in Figure 3.
  • content classification can be performed after the image is intercepted.
  • the classifier is used to identify the content in the image and determine its classification information.
  • the classifier can be obtained by training models such as a convolutional neural network (CNN), so that images can be classified intelligently on the terminal device.
  • users can also correct the classification results; for example, if an image is classified as clothes and the user adds search information such as "T-shirt" when searching in the shopping program, this correction information can be uploaded to the server as training data for subsequently adjusting the classifier and improving classification accuracy.
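The correction-feedback loop above can be sketched as appending the user's corrected label to the training set. This is a minimal illustration under assumed names; the actual upload and retraining mechanism is not specified in the text.

```python
# Sketch (illustrative): fold a user's corrected label into the training set
# so the classifier can later be re-trained for higher accuracy.
training_set = [("img_001", "clothes")]   # assumed existing labeled data

def record_correction(image_id, predicted, corrected):
    """Keep the user's correction (e.g. 'clothes' -> 'T-shirt') as new data;
    do nothing if the user's label matches the prediction."""
    if corrected != predicted:
        training_set.append((image_id, corrected))
    return training_set

record_correction("img_002", "clothes", "T-shirt")
```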
  • referring to FIG. 7, a flowchart of the steps of an embodiment of a data processing method according to the present application is shown, which specifically includes the following steps:
  • Step 702 Acquire image data.
  • the terminal device can obtain the image data to be shared in various ways: it can be downloaded from the network, obtained locally, captured by taking a picture, downloaded from a program, or generated by capturing the screen image.
  • the image data can be obtained according to various instructions and the sharing function can be started.
  • Step 704 Identify classification information of the image data, and determine corresponding sharing operation information according to the classification information.
  • for the image data to be shared, the classification information of the content it contains can be identified; for example, an image of a piece of clothing is identified with classification information such as clothes, T-shirt, or skirt, and a landscape image is identified with classification information such as mountains, water, Guilin, Mount Fuji, or Lijiang; the sharing operation information of the image data can then be determined based on the classification information, such as the program for sharing the image and the sharing operation to be performed.
  • Step 706 Call a corresponding program according to the sharing operation information, and use the program to publish the image data.
  • a corresponding program can be called according to the sharing operation information, a corresponding page launched in the program, the image data loaded in the page, and the image data published after the user's instruction.
  • for example, clothes, T-shirts, skirts, etc. can be searched in the shopping program, or sent to friends in an instant messaging program to discuss whether they are worth buying; landscape images can be shared in a circle of friends or friend groups, or tourist information such as Guilin, Mount Fuji, or Lijiang can be searched in a travel program.
  • the image data to be shared can be obtained, its classification information determined (such as the classification of the content in the image), the corresponding sharing operation information determined based on that classification information, and the image data then published according to the sharing operation information, which automatically determines and performs the sharing operation for the image data and improves the efficiency of information processing.
  • referring to FIG. 8, a flowchart of the steps of another embodiment of a data processing method according to the present application is shown, which specifically includes the following steps:
  • Step 802 Capture a screen image according to the instruction information to generate corresponding image data.
  • when the user is using the terminal device and is interested in the content displayed on the screen, instruction information can be issued in various ways, such as clicking, swiping, or a gesture operation, and the screen image is then captured according to the instruction information to generate corresponding image data.
  • the capture area may be determined according to the instruction information, and the screen image corresponding to the capture area captured.
  • the capture area of the image to be captured can be determined according to the instruction information — for example, a circular area is determined according to the clicked position, or an area such as a circle, triangle, or square is determined according to the sliding track — and the screen image in the capture area is then captured to generate image data.
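The area-from-gesture step can be sketched with simple geometry: take the bounding rectangle of the slide track and crop that region from the screen image. This is an illustrative reduction; a real implementation would handle circular and polygonal regions as the text describes.

```python
# Sketch (illustrative geometry): derive a rectangular capture area from the
# points of a slide track, then crop that region from the screen image.
def capture_area(track):
    """Bounding rectangle (left, top, right, bottom) of the gesture track."""
    xs = [p[0] for p in track]
    ys = [p[1] for p in track]
    return (min(xs), min(ys), max(xs), max(ys))

def crop(screen, area):
    """screen is a row-major 2D list of pixels; area gives inclusive bounds."""
    left, top, right, bottom = area
    return [row[left:right + 1] for row in screen[top:bottom + 1]]

track = [(2, 1), (5, 3), (3, 4)]                       # assumed slide track
area = capture_area(track)
screen = [[(x, y) for x in range(8)] for y in range(6)]  # stand-in 8x6 screen
patch = crop(screen, area)                             # the captured image data
```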
  • Step 804 Use a classifier to identify classification information corresponding to the content contained in the image data.
  • the image data can be input into a classifier for classification processing, so that the classifier can determine classification information of corresponding content based on the content contained in the image data, such as classification information such as clothes, skirts, landscapes, and Guilin.
  • using the classifier to identify the classification information corresponding to the content contained in the image data includes: using the classifier to classify the image data, determining a classification result vector of the content contained in the image data, and using the classification result vector as the classification information.
  • the image data can be input into a classifier for classification processing.
  • the classifier can determine the classification of the content contained in the image data, generate a corresponding classification result vector, and output it.
  • the classification result vector can be used as classification information.
  • the classification result vector can be determined according to the probability that the image data belongs to each category; for example, if 100 categories are set, the classifier can determine the probability that the image data belongs to each of them, generating a 100-dimensional vector in which each dimension corresponds to a category and its value is the probability that the image data belongs to that category.
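A vector of per-category probabilities of this kind is commonly produced by applying softmax to the classifier's raw scores. The sketch below shows the idea with 4 categories instead of 100; the score values are assumptions for illustration.

```python
# Sketch: turning per-category scores into a classification result vector in
# which each dimension is the probability the image belongs to that category.
import math

def softmax(scores):
    """Normalize raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 0.5, 0.1, -1.0]                 # illustrative classifier scores
classification_result_vector = softmax(scores)
# index of the most probable category
best = classification_result_vector.index(max(classification_result_vector))
```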
  • Step 806 Analyze the classification information according to a data analyzer to determine sharing operation information of the image data.
  • the classification information is input into the data analyzer, which analyzes and processes it to obtain the sharing operation information of the image data corresponding to the classification information; for example, it may be determined that clothes are generally searched in the shopping program, scenery is searched in the travel program, text is edited in office programs, and animated pictures are published in instant messaging programs.
  • the usage habit information can be obtained, and the usage habit information is converted into a usage habit vector; the usage habit vector and the classification result vector are input into a data analyzer for analysis to determine the sharing operation information of the image data.
  • the embodiment of the present application can determine the sharing operation of the image data based on the user's habits, so the usage habit information can be collected and converted into a usage habit vector, and the usage habit vector and classification result vector are then input into the data analyzer.
  • the data analyzer can determine the sharing operation information of the image data based on the classification result vector and the usage habit vector of the image data.
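The analyzer step can be sketched as scoring candidate sharing operations over the concatenation of the two input vectors. This is not the trained analyzer from the text — a real one would be an MLP trained on collected usage habits — and the operation set, dimensions, and weights below are assumptions.

```python
# Sketch (illustrative, not the trained analyzer): a one-layer scorer over the
# concatenated classification-result and usage-habit vectors that picks the
# highest-scoring sharing operation.
OPERATIONS = ["shopping:search", "messaging:post", "travel:search"]

def analyze(classification_vec, habit_vec, weights):
    x = classification_vec + habit_vec            # concatenate the two inputs
    scores = [sum(w * v for w, v in zip(row, x)) for row in weights]
    return OPERATIONS[scores.index(max(scores))]

# assumed weights: 2-dim classification vec + 2-dim habit vec -> 3 operations
weights = [
    [1.0, 0.0, 0.5, 0.0],   # clothes-like content + shopping habit
    [0.0, 1.0, 0.0, 0.5],   # selfie-like content + messaging habit
    [0.2, 0.2, 0.0, 0.0],
]
op = analyze([0.9, 0.1], [1.0, 0.0], weights)
```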
  • the sharing operation information includes program information and operation information, and the operation information includes sharing type and sharing content.
  • Step 808 Call a corresponding program according to the program information, and load the image data in the program according to the operation information.
  • the sharing operation information may include operation information corresponding to multiple programs, so the user may select one as the program to be called; that program is then called according to the program information, and the image data is loaded into it according to the operation information.
  • the corresponding page may be started in the program according to the sharing type; and the image data is loaded in the page according to the sharing content.
  • the user can also add any required editing information, such as text for a circle-of-friends or Weibo post, or add other image data.
  • the image data is published on the page.
  • Step 810 Publish the image data according to a publishing instruction.
  • the above-mentioned functions of image acquisition, classification, and sharing can be provided in the operating system, so that users can share various information at any time according to their needs while operating the terminal device, conveniently performing sharing operations such as searching, publishing, and positioning queries; therefore, a corresponding functional interface (Application Programming Interface, API) can be provided in the operating system, so that after a gesture operation is detected, instruction information can be generated and the functional interface called, and the determination of classification information and sharing operation information for the image data, and the calling of the program for publishing, can be implemented through the functional interface.
  • the following modules may be provided in the operating system of the terminal device: a functional interface 902 and a processing module 904, where the functional interface 902 may include various interfaces, such as a gesture recognition interface, an image capture interface, an image recognition interface, and interfaces to the programs called for sharing.
  • the processing module is constructed according to the corresponding processing logic and may include an image processing unit 9042, an image capture unit 9044, a content classification unit 9046, and an image sharing unit 9048; the image processing unit 9042 may recognize, based on a window manager, operations such as a click or swipe gesture, and the gesture and the information of the window area on which it acts are then input to the image capture unit 9044.
  • the image capture unit 9044 uses the image synthesizer to call a processing device such as a GPU to capture the image of the corresponding window area and generate image data; the image data can then be input into the content classification unit 9046, the corresponding classification information obtained through the classifier's processing, and the classification information then input to the image sharing unit 9048.
  • the image sharing unit 9048 determines the sharing operation information based on the user's usage habits and, based on the application manager, calls the corresponding program to share the image data; by learning users' content-sharing needs through machine learning over content classification and usage habits, an intelligent and convenient user experience is provided.
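The unit pipeline described above (gesture recognition, capture, classification, sharing) can be sketched as a chain of functions. The unit behaviors here are stand-in stubs for illustration; the real units would delegate to the window manager, image synthesizer, classifier, and application manager.

```python
# Sketch of the pipeline: gesture -> capture -> classify -> share.
def image_processing_unit(event):            # cf. unit 9042: recognize gesture
    return {"gesture": event["type"], "window": event["window"]}

def image_capture_unit(gesture_info):        # cf. unit 9044: capture window image
    return {"image": f"capture-of-{gesture_info['window']}"}

def content_classification_unit(image):      # cf. unit 9046: classify content
    return {"category": "clothes", **image}  # stub classification result

def image_sharing_unit(classified):          # cf. unit 9048: pick sharing op
    return {"program": "shopping", "action": "search", **classified}

event = {"type": "swipe", "window": "browser"}
result = image_sharing_unit(
    content_classification_unit(image_capture_unit(image_processing_unit(event)))
)
```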
  • the screen image can be captured by a gesture, and the sharing operation information then determined through classification processing and analysis of the sharing operation; the sharing operation information can display a list of sharing operations, where the determined programs include an instant messaging program, a travel program, a map program, etc.; the user's selection instruction can then be received to determine that the travel program is started, the content corresponding to the image is automatically searched in the travel program, and "Tianmushan" tourism product information is presented to the user.
  • the usage habits information such as the user's choices can be fed back to the system.
  • the user's behavior is automatically learned to train the classifier and data analyzer; after multiple rounds of learning, the next time the user shares similar content — for example, the classification information of the content corresponding to the image is identified as "Huangshan" or "scenic area" — the travel program can be automatically started to search for "Huangshan" or "scenic area", obtain the travel product information of the corresponding location, and present it to the user without the user having to choose again, meeting the user's needs and reflecting intelligence and convenience for the user.
  • this embodiment further provides a data processing apparatus, which can be applied to electronic devices such as a terminal device and a server.
  • referring to FIG. 10, a structural block diagram of an embodiment of a data processing apparatus according to the present application is shown, which may specifically include the following modules:
  • the obtaining module 1002 is configured to obtain image data.
  • the identification module 1004 is configured to identify classification information of the image data, and determine corresponding sharing operation information according to the classification information.
  • the sharing module 1006 is configured to call a corresponding program according to the sharing operation information, and use the program to release the image data.
  • the image data to be shared can be obtained, its classification information determined (such as the classification of the content in the image), the corresponding sharing operation information determined based on that classification information, and the image data then published according to the sharing operation information, which automatically determines and performs the sharing operation for the image data and improves the efficiency of information processing.
  • referring to FIG. 11, a structural block diagram of another embodiment of a data processing apparatus according to the present application is shown, which may specifically include the following modules:
  • the obtaining module 1002 is configured to obtain image data.
  • the identification module 1004 is configured to identify classification information of the image data, and determine corresponding sharing operation information according to the classification information.
  • the sharing module 1006 is configured to call a corresponding program according to the sharing operation information, and use the program to release the image data.
  • a feedback module 1008 is configured to add the sharing operation information to the usage habits information.
  • the identification module 1004 includes a classification submodule 10042 and a sharing operation submodule 10044, where:
  • the classification submodule 10042 is configured to use a classifier to identify classification information corresponding to content contained in the image data.
  • the sharing operation submodule 10044 is configured to analyze the classification information according to a data analyzer to determine sharing operation information of the image data.
  • the classification submodule 10042 is configured to use a classifier to perform classification processing on the image data, determine a classification result vector of content contained in the image data, and use the classification result vector as classification information.
  • the sharing operation submodule 10044 is configured to obtain usage habit information and convert the usage habit information into a usage habit vector; input the usage habit vector and classification result vector into a data analyzer for analysis to determine the image Data sharing operation information
  • the sharing operation information includes program information and operation information.
  • the sharing module 1006 includes: a program calling submodule 10062 and a data sharing submodule 10064, where:
  • the program calling submodule 10062 is configured to call a corresponding program according to the program information, and load the image data in the program according to the operation information.
  • the data sharing submodule 10064 is configured to release the image data according to a release instruction.
  • the operation information includes sharing type and sharing content.
  • the program calling submodule 10062 is further configured to start a corresponding page in the program according to the sharing type, and to load the image data in the page according to the sharing content.
  • the obtaining module 1002 is configured to intercept a screen image according to the instruction information and generate corresponding image data.
  • the obtaining module 1002 is configured to determine an interception area according to the instruction information, and intercept a screen image corresponding to the interception area.
  • the indication information is triggered according to at least one of the following operations: a tap operation, a gesture operation, and a slide operation.
  • the above-mentioned functions of image acquisition, classification, and sharing can be provided in the operating system, so that users can share various information at any time according to their needs while operating the terminal device, conveniently performing sharing operations such as searching, publishing, and positioning queries; therefore, a corresponding functional interface (Application Programming Interface, API) can be provided in the operating system, so that after a gesture operation is detected, instruction information can be generated and the functional interface called, and the determination of classification information and sharing operation information for the image data, and the calling of the program for publishing, can be implemented through the functional interface.
  • content classification can be performed after the image is intercepted.
  • the classifier is used to identify the content in the image and determine its classification information.
  • the classifier can be obtained by training models such as a convolutional neural network (CNN), so that images can be classified intelligently on the terminal device.
  • users can also correct the classification results; for example, if an image is classified as clothes and the user adds search information such as "T-shirt" when searching in the shopping program, this correction information can be uploaded to the server as training data for subsequently adjusting the classifier and improving classification accuracy.
  • An embodiment of the present application further provides a non-volatile readable storage medium.
  • the storage medium stores one or more modules which, when applied to a terminal, can make the terminal device execute the instructions of each method step in the embodiments of the present application.
  • the electronic device includes devices such as a terminal device and a server (cluster).
  • a terminal device refers to a device with a terminal operating system; such devices can support audio, video, and data functions, and include mobile terminals such as smartphones, tablets, and wearable devices, as well as smart TVs, personal computers, and the like; operating systems include AliOS, iOS, Android, Windows, etc.
  • FIG. 12 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
  • the electronic device may include a terminal device, a server (cluster), and other devices.
  • the electronic device may include an input device 120, a processor 121, an output device 122, a memory 123, and at least one communication bus 124.
  • the communication bus 124 is used to implement a communication connection between the components.
  • the memory 123 may include high-speed RAM (Random Access Memory), and may also include non-volatile storage NVM (Non-Volatile Memory), such as at least one disk memory.
  • the memory 123 may store various programs used to complete various processing functions and implement the method steps of this embodiment.
  • the processor 121 may be implemented as, for example, a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a controller, a microcontroller, a microprocessor, or other electronic components.
  • the processor 121 is coupled to the input device 120 and output device 122 through a wired or wireless connection.
  • the input device 120 may include multiple input devices, for example, it may include at least one of a user-oriented user interface, a device-oriented device interface, a software programmable interface, a camera, and a sensor.
  • the device-oriented device interface may be a wired interface for data transmission between devices, or a hardware insertion interface (for example, a USB interface or serial port) for data transmission between devices.
  • the user-oriented user interface may be, for example, user-oriented control buttons, a voice input device for receiving voice input, or a touch sensing device (for example, a touch screen or touch panel with a touch sensing function);
  • the programmable interface of the software can be, for example, an entry for a user to edit or modify a program, such as an input pin interface or input interface of a chip;
  • a transceiver may include a radio-frequency transceiver chip with a communication function, a baseband processing chip, and a transceiver antenna; an audio input device such as a microphone can receive voice data.
  • the output device 122 may include output devices such as a display and an audio output device.
  • the processor of the device includes functions for executing each module of the network management device in each electronic device.
  • FIG. 13 is a schematic diagram of a hardware structure of an electronic device according to another embodiment of the present application.
  • FIG. 13 is a specific embodiment of the implementation process of FIG. 12.
  • the electronic device in this embodiment includes a processor 131 and a memory 132.
  • the processor 131 executes the computer program code stored in the memory 132 to implement the data processing method of FIGS. 1 to 9 in the foregoing embodiment.
  • the memory 132 is configured to store various types of data to support operation at the electronic device. Examples of such data include instructions for any application or method for operating on an electronic device, such as messages, pictures, videos, etc.
  • the memory 132 may include a random access memory RAM, and may also include a non-volatile memory NVM, such as at least one disk memory.
  • the processor 131 is disposed in the processing component 130.
  • the electronic device may further include a communication component 133, a power component 134, a multimedia component 135, an audio component 136, an input / output interface 137, and / or a sensor component 138.
  • the specific components and the like included in the device are set according to actual requirements, which is not limited in this embodiment.
  • the processing component 130 generally controls the overall operation of the device.
  • the processing component 130 may include one or more processors 131 to execute instructions to complete all or part of the steps of the methods in FIG. 1 to FIG. 9.
  • the processing component 130 may include one or more modules to facilitate the interaction between the processing component 130 and other components.
  • the processing component 130 may include a multimedia module to facilitate the interaction between the multimedia component 135 and the processing component 130.
  • the power supply assembly 134 provides power to various components of the device.
  • the power component 134 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for electronic devices.
  • the multimedia component 135 includes a display screen that provides an output interface between the device and the user.
  • the display screen may include a liquid crystal display (LCD) and a touch panel (TP). If the display screen includes a touch panel, the display screen may be implemented as a touch screen to receive an input signal from a user.
  • the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
  • the audio component 136 is configured to output and / or input audio signals.
  • the audio component 136 includes a microphone (MIC) that is configured to receive external audio signals when the device is in an operating mode, such as a voice recognition mode.
  • the received audio signal may be further stored in the memory 132 or transmitted via the communication component 133.
  • the audio component 136 further includes a speaker for outputting audio signals.
  • the input / output interface 137 provides an interface between the processing component 130 and a peripheral interface module.
  • the peripheral interface module may be a click wheel, a button, or the like. These buttons can include, but are not limited to: a volume button, a start button, and a lock button.
  • the sensor component 138 includes one or more sensors for providing status assessments of various aspects of the device.
  • the sensor component 138 may detect the on/off state of the device, the relative positioning of components, and the presence or absence of user contact with the device.
  • the sensor component 138 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, including detecting the distance between the user and the device.
  • the sensor component 138 may further include a camera and the like.
  • the communication component 133 is configured to facilitate wired or wireless communication between the electronic device and other electronic devices.
  • the electronic device can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof.
  • the electronic device may include a SIM card slot.
  • the SIM card slot is used to insert a SIM card, so that the device can log in to the GPRS network and establish communication with the server through the Internet.
  • the communication component 133, the audio component 136, the input/output interface 137, and the sensor component 138 involved in the embodiment of FIG. 13 can all serve as implementations of the input device in the embodiment of FIG. 12.
  • An embodiment of the present application provides an electronic device, including: a processor; and a memory storing executable code that, when executed, causes the processor to perform one or more of the data processing methods described in the embodiments of the present application.
  • An embodiment of the present application further provides an operating system for an electronic device.
  • the operating system of the terminal device includes a processing unit 1402 and a sharing unit 1404.
  • the processing unit 1402 acquires image data; identifies classification information of the image data, and determines corresponding sharing operation information according to the classification information.
  • the sharing unit 1404 calls a corresponding program according to the sharing operation information, and uses the program to publish the image data.
  • since the operating-system embodiment is substantially similar to the method embodiments, its description is relatively simple; for relevant parts, refer to the description of the method embodiments.
  • the embodiments of the present application may be provided as a method, an apparatus, or a computer program product. Therefore, the embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) containing computer-usable program code.
  • these computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing terminal device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing terminal device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • these computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing terminal device to work in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • these computer program instructions may also be loaded onto a computer or other programmable data processing terminal device, so that a series of operational steps are performed on the computer or other programmable terminal device to produce a computer-implemented process, such that the instructions executed on the computer or other programmable terminal device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.


Abstract

Embodiments of the present application provide a data processing method, apparatus, electronic device, and readable medium to improve the efficiency of information processing. The method includes: acquiring image data; identifying classification information of the image data, and determining corresponding sharing operation information according to the classification information; and calling a corresponding program according to the sharing operation information and publishing the image data with that program. A sharing operation for image data can thus be determined and executed automatically, improving the efficiency of information processing.

Description

Data processing method, apparatus, electronic device, and readable medium
This application claims priority to Chinese Patent Application No. 201810581577.4, entitled "Data processing method, apparatus, electronic device and readable medium" and filed on June 7, 2018, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of computer technology, and in particular to a data processing method, a data processing apparatus, an electronic device, and a machine-readable medium.
Background
With the development of terminal technology, more and more users perform desired operations with terminal devices, for example querying information through a browser, sharing and interacting with information through social software, and communicating through instant messaging software.
While browsing information, a user who encounters an interesting picture or similar content can save it to the terminal device and then open the corresponding software program to share it, for example sending it to a friend in a communication program, or searching for the corresponding product in a shopping program.
However, such sharing usually requires users to determine the sharing program themselves, for example closing the current program and then opening the program used for sharing, or selecting a share option in the current program and then looking up the sharing program, so information processing is inefficient.
Summary
Embodiments of the present application provide a data processing method to improve the efficiency of information processing.
Correspondingly, embodiments of the present application further provide a data processing apparatus, an electronic device, and a machine-readable medium to guarantee the implementation and application of the above method.
To solve the above problem, an embodiment of the present application discloses a data processing method, including: acquiring image data; identifying classification information of the image data, and determining corresponding sharing operation information according to the classification information; and calling a corresponding program according to the sharing operation information, and publishing the image data with the program.
An embodiment of the present application further discloses a data processing apparatus, including: an acquisition module configured to acquire image data; an identification module configured to identify classification information of the image data and determine corresponding sharing operation information according to the classification information; and a sharing module configured to call a corresponding program according to the sharing operation information and publish the image data with the program.
An embodiment of the present application further discloses an electronic device, including: a processor; and a memory storing executable code that, when executed, causes the processor to perform one or more of the data processing methods described in the embodiments of the present application.
An embodiment of the present application further discloses one or more machine-readable media storing executable code that, when executed, causes a processor to perform one or more of the data processing methods described in the embodiments of the present application.
An embodiment of the present application further discloses an operating system for an electronic device, including: a processing unit that acquires image data, identifies classification information of the image data, and determines corresponding sharing operation information according to the classification information; and a sharing unit that calls a corresponding program according to the sharing operation information and publishes the image data with the program.
Compared with the prior art, embodiments of the present application have the following advantages:
In embodiments of the present application, image data to be shared can be acquired, and classification information of the image data is then determined, for example the category of the content in the image; corresponding sharing operation information is determined according to the classification information, and the image data is then published according to the sharing operation information. A sharing operation for the image data can thus be determined and executed automatically, improving the efficiency of information processing.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of image sharing processing in an embodiment of the present application;
FIG. 2 is a schematic interface diagram of a sharing process in an embodiment of the present application;
FIG. 3 is a schematic interface diagram of another sharing process in an embodiment of the present application;
FIG. 4 is a schematic diagram of an image capture example in an embodiment of the present application;
FIG. 5 is a schematic processing diagram of a classifier in an embodiment of the present application;
FIG. 6 is a schematic processing diagram of a data analyzer in an embodiment of the present application;
FIG. 7 is a flowchart of the steps of a data processing method embodiment of the present application;
FIG. 8 is a flowchart of the steps of another data processing method embodiment of the present application;
FIG. 9 is a schematic structural diagram of system modules in an embodiment of the present application;
FIG. 10 is a structural block diagram of a data processing apparatus embodiment of the present application;
FIG. 11 is a structural block diagram of another data processing apparatus embodiment of the present application;
FIG. 12 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present application;
FIG. 13 is a schematic diagram of the hardware structure of an electronic device provided by another embodiment of the present application;
FIG. 14 is a schematic structural diagram of an operating system according to an embodiment of the present application.
Detailed Description
To make the above objectives, features, and advantages of the present application clearer and easier to understand, the present application is described in further detail below with reference to the drawings and specific embodiments.
本申请实施例提出一种数据处理方法,对获取的图像数据能够自动识别其对应的分类信息,其中,可依据图像数据中包含的内容识别分类信息,再依据分类信息确定对应的分享操作信息,依据分享操作信息发布所述图像数据,从而能够自动确定对图像数据的分享操作并执行分享,提高信息的处理效率。本申请实施例中,分享指的是共同使用,图像数据可与其他应用程序共同使用,还可展示给其他用户观看,例如在购物程序中搜索图像中的商品,又如在通信程序中发送给好友等。
参照图1所示,为本申请实施例中图像分享处理的示意图。
在步骤102中可获取图像数据。其中图像数据的获取来源有多种，例如可从网络中下载，可从本地获取，可拍照获取，也可从程序中下载或者截取屏幕的图像来获取。其中，以截取方式获取图像数据的步骤包括：依据指示信息截取屏幕图像，生成对应的图像数据。用户在使用终端设备时，若对终端设备显示的内容感兴趣，可通过各种操作发出指示信息，然后依据该指示信息截取屏幕的图像，得到相应的图像数据，如截取整个屏幕的图像或者屏幕中部分区域的图像。所述指示信息依据以下至少一种操作触发：点击操作、手势操作、滑动操作。可点击、双击屏幕等生成点击操作。也可在终端设备上执行设定的手势生成手势操作如摇动设备、或在屏幕上作手势等。还可在屏幕上滑动来生成滑动操作，如滑动圈出截取屏幕图像的区域等。
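The three trigger operations named above (click, gesture, slide) can be distinguished from raw touch samples with a minimal dispatcher. This is only an illustrative sketch: the event format, the stroke convention, and the movement threshold are assumptions of this example, not interfaces defined in the text.

```python
def classify_trigger(events, move_threshold=10):
    """Classify one touch input into an instruction type.

    `events` is either a list of (x, y) samples from a single touch, or a
    list of strokes (a list of lists) for multi-stroke input. A short stay
    in place reads as a click, a long displacement as a slide, and any
    multi-stroke input as a gesture.
    """
    if events and isinstance(events[0], list):  # multi-stroke input
        return "gesture"
    (x0, y0), (x1, y1) = events[0], events[-1]
    if abs(x1 - x0) + abs(y1 - y0) < move_threshold:
        return "click"
    return "slide"
```

In a real system the threshold and stroke detection would come from the platform's gesture recognizer rather than a hand-rolled rule.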
一个示例中,所述依据指示信息截取屏幕图像,包括:依据指示信息确定截取区域,截取所述截取区域对应的屏幕图像。依据指示信息可确定出截取区域,如依据指示信息确定区域的坐标信息,又如获取区域中心点来确定区域坐标等,然后在该截取区域内截取屏幕图像。如图2中左侧的屏幕示意图,可从指示信息中获取点击的位置,将该位置作为圆形区域的圆心,并设置半径确定圆形区域,截取该圆形区域的图像。又如在屏幕上滑动发出指示信息,依据滑动坐标确定对应的区域,其中滑动的区域可能不规则,可将滑动区域调整为对应的圆形、方形、三角形等各种区域,然后截取图像数据。
如图4所示的一种图像截取示例中，系统内核包括：输入设备和图像处理设备，其中，输入设备用于检测输入，图像处理设备用于执行图像相关的处理，例如图像处理设备为GPU(Graphics Processing Unit,图形处理器)。该操作系统还包括：输入处理模块、窗口管理器和图像合成器，其中，输入处理模块用于对输入事件进行处理，窗口管理器用于管理界面窗口，如定位指示截取图像的窗口等，图像合成器用于合成图像。在步骤402中可通过输入设备检测输入事件，然后将输入事件（input event）传输给输入处理模块。然后步骤404中采用输入处理模块依据输入事件识别手势，输出手势划定的区域坐标(topx,topy,width,height)给窗口管理器。在步骤406中窗口管理器可依据手势及区域坐标等信息确定窗口信息并输出，例如可确定手势作用的窗口，如当前运行应用界面的窗口等，然后输出该窗口对应的图层标识及该图层截取区域的坐标等作为窗口信息给图像合成器。在步骤408中图像合成器传输该窗口信息给图像处理设备截取图像，再返回图像合成器合成图像数据，例如从GPU中读取指定区域的图像，反馈给图像合成器生成对应格式的图像数据。还可将图像数据返回给窗口管理器进行显示等处理。
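The capture flow above (input event → gesture-delimited region coordinates → clipped image) can be sketched in miniature as follows. The function names and the (top-x, top-y, width, height) tuple are illustrative assumptions mirroring the description, not the patent's actual interfaces.

```python
def gesture_to_rect(points, screen_w, screen_h):
    """Normalize a free-form swipe into an axis-aligned capture rectangle.

    `points` is a list of (x, y) screen coordinates from the slide gesture;
    the irregular stroke is regularized to its bounding box, clamped to the
    screen, mirroring the step where the window manager derives
    (topx, topy, width, height) for the image compositor.
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    left, top = max(0, min(xs)), max(0, min(ys))
    right, bottom = min(screen_w, max(xs)), min(screen_h, max(ys))
    return (left, top, right - left, bottom - top)


def crop(frame, rect):
    """Crop a row-major 2D frame (list of pixel rows) to the rectangle."""
    x, y, w, h = rect
    return [row[x:x + w] for row in frame[y:y + h]]
```

A production implementation would instead read the region back from the GPU through the compositor, as the text describes; the bounding-box regularization is one simple choice among the circle/square/triangle adjustments mentioned above.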
然后在步骤104中识别图像数据的分类信息，其中可识别出图像中的内容再依据内容确定分类信息。例如训练分类器，基于该分类器进行图像数据对应类别的识别，从而可采用分类器识别图像数据中包含的内容对应的分类信息。其中，分类器也可称为分类模型、用于分类的数据集合等，该分类器用于对图像中包含内容的类别进行识别，分类器可基于数据模型训练得到。可以将图像输入到分类器中，该分类器可输出该图像的分类信息，其中该分类信息中可包含一个或多个类别，类别为图像数据所包含内容所属的类别。例如对于图2中左侧的屏幕示意图，对其截取的圆形区域对应的图像数据，可识别其中包含的内容为衣服或上衣、T恤等类别。
一个示例中基于图像数据库和卷积神经网络(Convolutional Neural Network,CNN)模型训练分类器,其中,图像数据库可存储从终端设备、网络等获取的图像数据,以及该图像数据所包含内容的分类信息等,从而对卷积神经网络模型进行训练,得到分类器,该分类器可识别图像中包含内容的分类信息。
本申请一个可选实施例中,所述采用分类器识别图像数据中包含的内容对应的分类信息,包括:采用分类器对图像数据进行分类处理,确定所述图像数据中包含的内容的分类结果向量,将所述分类结果向量作为分类信息。可将图像数据输入分类器,分类器对图像进行分类处理,然后输出图像数据中包含的内容的分类结果向量,其中可输出一个或多个分类结果向量,将分类结果向量作为分类信息。
如图5所示为一种分类器的处理示意图，可从图像数据中提取R、G、B、A四个通道（channel）作为分类器的输入。其中，R通道为红色空间通道、G为绿色空间通道、B为蓝色空间通道、A为Alpha空间，也就是透明度/不透明度，用作不透明度参数。在步骤502中将上述数据输入到分类器后，可通过分类器中的卷积层、全链路层、softmax层等处理，其中softmax层可看做归一化层，上述4个通道的数据通过一个或多个卷积层处理后输入到全链路层，然后采用全链路层确定各分类的概率等数据，再基于softmax层将各分类的概率等数据转化为分类结果向量。其中可基于不同分类结果的概率生成相应的分类结果向量，也可将各分类结果的概率整合为一个分类结果向量。例如分类的类别包括服饰、美食、风景、文字等，则可基于这些类别以及图像数据所属各类别的概率生成相应的分类结果向量。
本申请实施例中，通过分类器处理得到图像数据的分类信息，该分类信息可为一级分类，也可为N级分类，N为大于1的正整数，具体可依据实际需求确定。如图2中截取圆形区域的图像，相应识别结果可为一级分类的衣服、二级分类的上衣或三级分类的T恤等。其中，对于二级分类到N级分类，可通过上述卷积层、全链路层、softmax层等网络模型的多次处理得到，如将一次处理得到的一级分类信息以及图像数据再次输入该网络模型中，可得二级或N级分类的分类信息。例如通过softmax层处理得到分类结果向量，得到一级分类如（服装概率，风景概率，人物概率，文本概率……），又如得到N级分类（衣服及其种类的概率，裤子及其种类的概率，袜子及其种类的概率，……）。
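The final softmax step above, which turns per-category scores into the classification result vector, can be sketched as follows. The category labels and raw scores are made-up stand-ins; the convolutional and fully connected layers that produce the scores are omitted.

```python
import math

CATEGORIES = ["clothing", "food", "scenery", "text"]  # illustrative label set


def softmax(scores):
    """Numerically stable softmax: shift by the max score, exponentiate,
    and normalize so the outputs sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]


def classify(scores):
    """Map raw per-category scores to a classification-result vector
    (one probability per category) plus the top-1 category label."""
    probs = softmax(scores)
    top = CATEGORIES[probs.index(max(probs))]
    return probs, top
```

For the N-level classification described above, the same normalization would simply be applied again to the scores of the finer-grained label set.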
从而通过训练得到的分类器可快速的确定图像数据的分类信息,图像数据及其分类信息在使用后也可作为训练数据,便于后续优化分类器。
得到分类信息后在步骤106中可依据所述分类信息确定对应的分享操作信息，依据该分类信息可确定相应类别的图像的分享信息，分享操作信息为发布该图像数据的相关信息，如包括分享图像数据的软件以及执行的操作信息等。其中可依据数据分析器对所述分类信息进行分析，确定所述图像数据的分享操作信息。该数据分析器可基于用户的使用习惯信息等训练得到，从而将分类信息输入到数据分析器进行处理，然后可输出图像数据的分享操作信息。其中，数据分析器也可称为数据分析器模型、用于分析的数据集合等，该数据分析器用于确定图像分享操作信息，可基于数据模型训练得到。
本申请一个可选实施例中，所述依据数据分析器对所述分类信息进行分析，确定所述图像数据的分享操作信息，包括：获取使用习惯信息，将所述使用习惯信息转换为使用习惯向量；将所述使用习惯向量和分类结果向量输入到数据分析器中进行分析，确定所述图像数据的分享操作信息。对于图像的分享还可基于用户习惯确定，因此可预先收集用户的使用习惯信息，如用户在获取不同图像时执行分享的程序，又如用户在不同程序中执行的操作等，如在购物程序中搜索衣服，在即时通信程序中分享自拍，在旅游程序中查询旅游地信息等。还可将该使用习惯信息转换为使用习惯向量，如将使用习惯向量中程序和其中分享信息建立关联，对于分享信息按照类别生成向量，该向量中所属类别为1，其他类别为0，从而确定出每种程序对应分享信息的类别向量作为使用习惯向量，将使用习惯向量和分类结果向量输入到数据分析器中，通过数据分析器的分析，可输出该图像数据的分享操作信息。
其中,数据分析器可依据各种分析模型训练得到,例如通过多层神经网络(Multi-layer Perceptron,MLP)模型训练数据分析器。分享操作信息包括:程序信息、操作信息,其中程序信息为分享该图像数据的程序的信息,如程序标识、程序名称等,操作信息为对该图像数据执行分享操作的信息,如搜索、聊天等发布操作。所述操作信息包括分享类型和分享内容,其中分享类型为在程序中分享的页面类型,如搜索页面、信息发布页面、聊天页面等,分享内容为该图像数据对应的内容,如图像标识、图像存储地址等。
如图6所示的示例中，数据分析器基于MLP模型训练得到。在步骤602中将使用习惯向量和分类结果向量输入数据分析器，然后采用数据分析器进行处理，输出相应的分享操作信息。例如分享操作信息的格式为{程序，分享类型，分享内容}，则依据输入可得一个分享操作信息如启动程序{淘宝}，分享类型{搜索}，分享内容{短裙图像}；又如启动程序{高德}，分享类型{定位}，分享内容{富士山}；启动程序{微信}，分享类型{发朋友圈}，分享内容{烤鸭图片}等。
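The text describes the data analyzer as a trained MLP; the following is only a minimal score-based stand-in showing how a habit vector and a classification result vector can jointly rank {program, share type, share content} candidates. The program names, weights, and share types are hypothetical examples, not learned values.

```python
# Per-program "habit vectors": one weight per category index, mirroring the
# per-program category vectors described above. All entries are illustrative
# stand-ins for what the trained MLP would learn.
HABITS = {
    ("Taobao", "search"): [0.9, 0.1, 0.0, 0.0],
    ("WeChat", "post moments"): [0.5, 0.6, 0.4, 0.2],
    ("AMap", "locate"): [0.0, 0.1, 0.9, 0.0],
}


def share_operations(class_vector, content, top_k=2):
    """Rank candidate {program, share type, share content} triples by the
    dot product of each habit vector with the classification-result vector,
    returning the top_k candidates to offer the user."""
    scored = []
    for (program, share_type), weights in HABITS.items():
        score = sum(w * p for w, p in zip(weights, class_vector))
        scored.append((score, {"program": program,
                               "share_type": share_type,
                               "share_content": content}))
    scored.sort(key=lambda s: -s[0])
    return [op for _, op in scored[:top_k]]
```

Returning several candidates matches the behavior described later, where the user may pick one of multiple recommended programs.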
本申请实施例中,分类器可确定图像的分类信息,数据分析器可分析对图像数据的分享操作信息,上述分类器和数据分析器可分别训练得到,也可组合为一个数据处理器,或者拆分为其他处理器等,或者采用其他的数据处理器、数据处理集合、处理模型等代替。其中,数学模型是运用数理逻辑方法和数学语言建构的科学或工程模型,数学模型是针对参照某种事物系统的特征或数量依存关系,采用数学语言,概括地或近似地表述出的一种数学结构,这种数学结构是借助于数学符号刻画出来的某种系统的纯关系结构。数学模型可以是一个或一组代数方程、微分方程、差分方程、积分方程或统计学方程及其组合,通过这些方程定量地或定性地描述系统各变量之间的相互关系或因果关系。除了用方程描述的数学模型外,还有用其他数学工具,如代数、几何、拓扑、数理逻辑等描述的模型。数学模型描述的是系统的行为和特征而不是系统的实际结构。
本申请实施例中上述使用习惯信息可上传给服务器,从而便于服务器基于各用户的使用习惯信息训练数据分析器,还可将分享操作信息添加到使用习惯信息中,从而更新数据分析器的训练集,通过训练提高数据分析器的准确性。
得到图像数据的分享操作信息后,可在步骤108中依据分享操作信息调用对应的程序,采用所述程序发布所述图像数据。其中,依据分享操作信息可确定需要调用的程序,然后在该程序中发布该图像数据,如搜索图像数据中的商品,在朋友圈中分享图像数据,将图像数据发送给好友等。其中,分享操作信息可包括一个或多个程序信息及其操作信息,即分享操作信息可推荐一个或多个程序给用户进行选择,因此还可接收用户的选择指示,依据该选择指示选择程序,然后在程序中发布该图像数据。
本申请一个可选实施例中,所述依据分享操作信息调用对应的程序,采用所述程序发布所述图像数据,包括:依据所述程序信息调用对应的程序,依据所述操作信息在所述程序中加载所述图像数据;依据发布指示发布所述图像数据。可以从程序信息中的程序标识、程序名称等确定需要调用的程序,然后调用该程序,再依据操作信息在程序中加载该图像数据,例如在程序中启动相应的页面加载图像数据,然后可接收用户的发布指示,如指示发送、查询以及添加编辑信息等,然后可依据发布指示发布图像数据,完成对图像数据的分享。
其中，所述依据操作信息在所述程序中加载所述图像数据，包括：依据所述分享类型在所述程序中启动对应的页面；依据所述分享内容在所述页面中加载所述图像数据。可以依据分享类型确定在程序中启动的页面，如搜索页面、聊天页面、朋友圈页面、微博编辑页面等，然后依据分享内容在该页面中加载所述图像数据，用户还可添加所需的编辑信息，如朋友圈、微博的文本数据或添加其他图像数据等，在编辑完后可在该页面中发布该图像数据。
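The dispatch step above (call the program, open the page matching the share type, load the content, publish on the user's instruction) can be sketched as a tiny table-driven dispatcher. The page-handler mapping and confirmation flag are assumptions of this sketch.

```python
def publish(op, pages, confirm):
    """Launch the page matching the share operation, load the share content
    into it, and publish only after the user's publish instruction.

    `op` is a {program, share_type, share_content} dict as produced by the
    analyzer; `pages` maps (program, share_type) to a page handler; and
    `confirm` stands in for the user's publish instruction. Returns the
    page's result, or None if the user never confirms.
    """
    handler = pages[(op["program"], op["share_type"])]
    loaded = handler(op["share_content"])  # load image data into the page
    return loaded if confirm else None
```

On a real terminal the handler lookup would go through the platform's inter-app invocation mechanism rather than an in-process dictionary.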
如上述图2所示的示例中,用户在使用终端设备的过程中对一件衣服感兴趣,可以发出指示信息划定截取区域,然后在该截取区域中截取图像数据。该图像数据进行分类处理确定分类信息为衣服,然后通过分析处理得到分享操作信息为{即时通讯程序,发朋友圈,衣服图像},从而可启动即时通讯程序的朋友圈对应页面,然后在该页面中加载衣服的图像数据,然后用户可在该页面中编辑相应的信息,如图2右侧的终端界面所示,再点击发布控件执行发布操作,分享衣服图像给其他用户。
上述图3所示的示例中，用户在使用终端设备的过程中对一件衣服感兴趣，可以发出指示信息划定截取区域，然后在该截取区域中截取图像数据。该图像数据进行分类处理确定分类信息为T恤，然后通过分析处理得到分享操作信息包括：{即时通讯程序，发朋友圈，T恤图像}、{购物程序，搜索，T恤图像}、{即时通讯程序，发给好友，T恤图像}等，用户选择在购物程序中分享，因此可调用购物程序并启动相应的搜索页面，在该搜索页面中可加载图像数据进行搜索，或者以分类信息中的T恤进行搜索，得到相应的搜索结果如图3中右侧的终端界面所示。
本申请实施例中,在截取图像后可进行内容分类,如基于分类器识别图像中内容的分类信息,该分类器可基于CNN等模型训练得到,从而在终端设备上对图像进行智能分类。并且,用户还可对分类结果进行修正,例如图像分类为衣服,用户在购物程序中搜索时还可添加搜索信息如T恤等,从而修正信息还可上传到服务器中,作为训练数据便于后续调整分类器,提高分类的准确性。
参照图7,示出了本申请一种数据处理方法实施例的步骤流程图,具体包括如下步骤:
步骤702,获取图像数据。
终端设备可通过各种方式获取需要分享的图像数据,例如可从网络中下载,可从本地获取,可拍照获取,也可从程序中下载或者截取屏幕的图像来获取。其中可依据各种指示信息获取图像数据以及启动分享功能。
步骤704,识别图像数据的分类信息,并依据所述分类信息确定对应的分享操作信息。
对于要分享的图像数据,可识别该图像数据中包含内容的分类信息,例如截取一种衣服的图像识别为衣服、T恤、短裙等分类信息,又如截取一张风景图像识别为山、水、桂林、富士山、丽江等分类信息。还可依据该分类信息确定对图像数据的分享操作信息,如分享图像的程序、所需执行的分享操作等。
步骤706,依据所述分享操作信息调用对应的程序,采用所述程序发布所述图像数据。
依据该分享操作信息可调用对应的程序，然后在该程序中启动相应的页面，在页面中加载该图像数据并在用户指示后发布该图像数据。如对于衣服、T恤、短裙等在购物程序中搜索、或者在即时通讯程序中发送给好友以讨论是否值得购买等，又如在朋友圈、好友群组中分享山水图像，或者在旅游程序中搜索桂林、富士山、丽江等位置的旅游信息等。
综上,可获取需要分享的图像数据,然后确定该图像数据的分类信息,如确定图像中内容的分类,并依据该分类信息确定对应的分享操作信息,然后可依据分享操作信息发布所述图像数据,从而能够自动确定对图像数据的分享操作并执行分享,提高信息的处理效率。
参照图8，示出了本申请另一种数据处理方法实施例的步骤流程图，具体包括如下步骤：
步骤802,依据指示信息截取屏幕图像,生成对应的图像数据。
用户使用终端设备的过程中，若对屏幕中显示的内容感兴趣，可通过各种方式发出指示信息，如点击、滑动、手势操作等发出指示信息，然后依据该指示信息截取屏幕图像，生成对应的图像数据。
其中,可依据指示信息确定截取区域,截取所述截取区域对应的屏幕图像。可依据该指示信息确定要截取图像的截取区域,如依据点击的位置确定圆形区域,依据滑动轨迹确定圆形、三角形、方形等多边形区域等,然后截取该截取区域内的屏幕图像,生成图像数据。
步骤804,采用分类器识别图像数据中包含的内容对应的分类信息。
可将图像数据输入到分类器中进行分类处理,从而分类器能够基于图像数据中包含的内容确定对应内容的分类信息,如分类信息为衣服、短裙、风景、桂林等。
其中,所述采用分类器识别图像数据中包含的内容对应的分类信息,包括:采用分类器对图像数据进行分类处理,确定所述图像数据中包含的内容的分类结果向量,将所述分类结果向量作为分类信息。可以将图像数据输入到分类器中进行分类处理,分类器可确定图像数据中包含的内容的分类,生成相应的分类结果向量并输出,可将该分类结果向量作为分类信息。
一个示例中,该分类结果向量可依据图像数据所属各类别的概率确定,例如设置100个分类,则分类器可确定图像数据属于各类别的概率,从而生成一100维的向量,向量中每个维度对应一个类别,该维度的值为图像数据属于该类别的概率值,从而生成相应的分类结果向量。
步骤806,依据数据分析器对所述分类信息进行分析,确定所述图像数据的分享操作信息。
然后将分类信息输入到数据分析器中,通过数据分析器对分类信息进行分析处理,得到该分类信息对应图像数据的分享操作信息,例如确定通常衣服在购物程序中搜索,风景在旅游程序中查询,文本在办公程序中编辑,动画图片在即时通讯程序中发布等。
其中，可获取使用习惯信息，将所述使用习惯信息转换为使用习惯向量；将所述使用习惯向量和分类结果向量输入到数据分析器中进行分析，确定所述图像数据的分享操作信息。本申请实施例可基于用户的习惯确定对图像数据的分享操作，因此可收集使用习惯信息，将该使用习惯信息转换为使用习惯向量，然后将使用习惯向量和分类结果向量输入到数据分析器，数据分析器基于图像数据的分类结果向量和使用习惯向量可确定对该图像数据的分享操作信息。所述分享操作信息包括：程序信息、操作信息，所述操作信息包括分享类型和分享内容。
步骤808,依据所述程序信息调用对应的程序,依据所述操作信息在所述程序中加载所述图像数据。
本申请实施例中，分享操作信息可能包括多个程序对应的操作信息，因此用户可选择一个程序作为调用的程序，然后依据程序信息调用该程序，再依据操作信息在程序中加载该图像数据。其中，可依据所述分享类型在所述程序中启动对应的页面；依据所述分享内容在所述页面中加载所述图像数据。可以从程序信息中的程序标识、程序名称等确定需要调用的程序，然后调用该程序，再依据分享类型确定在程序中启动的页面，如搜索页面、聊天页面、朋友圈页面、微博编辑页面等，然后依据分享内容在该页面中加载所述图像数据，用户还可添加所需的编辑信息，如朋友圈、微博的文本数据或添加其他图像数据等，在编辑完后可在该页面中发布该图像数据。
步骤810,依据发布指示发布所述图像数据。
可接收用户的发布指示,如指示发送、查询以及添加编辑信息等,然后可依据发布指示发布图像数据,完成对图像数据的分享。
本申请实施例中，可在操作系统中设置上述图像获取、分类、分享等功能，从而用户对终端设备操作过程中可依据需求随时分享各种信息，便捷地实现对信息的搜索、发布、定位查询等分享操作。因此可在操作系统中设置相应的功能接口API（Application Programming Interface，应用程序编程接口），从而在检测到手势操作后可生成指示信息，然后调用该功能接口，通过该功能接口实现对图像数据的分类信息、分享操作信息的确定并调用程序进行发布等。
在一个示例中，如图9所示，可在终端设备的操作系统中设置如下模块：功能接口902和处理模块904，其中功能接口902可包括各种接口，如手势识别接口、图像截取接口、图像识别接口以及分享所需调用程序的接口等。处理模块依据相应的处理逻辑构建，可包括图像处理单元9042、图像截取单元9044、内容分类单元9046和图像分享单元9048，其中，对于点击、滑动手势等操作，图像处理单元9042可基于窗口管理器识别手势以及手势作用的窗口区域的信息，然后输入给图像截取单元9044，图像截取单元9044基于图像合成器可调用GPU等处理设备截取相应窗口区域的图像，生成图像数据。然后可将图像数据输入到内容分类单元9046中，通过分类器的处理得到相应的分类信息，再将分类信息输入到图像分享单元9048，图像分享单元9048将该分类信息结合用户使用习惯确定出分享操作信息，并基于应用管理器来调用相应的程序分享该图像数据。从而通过基于内容分类和用户使用习惯的机器学习来学习用户对内容分享的需求，提供智能与便捷的用户体验。
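The unit chain described above (image processing → capture → content classification → sharing) can be sketched as a simple pipeline with each stage injected as a callable. The stage signatures are assumptions standing in for the gesture/capture/recognition/share interfaces named in the text.

```python
def pipeline(capture, classifier, analyzer, share):
    """Chain the OS-level units described above: image capture ->
    content classification -> share-operation analysis -> program
    invocation. Each stage is an injected callable, so real or mock
    implementations can be swapped in."""
    def run(gesture_event):
        image = capture(gesture_event)      # image capture unit
        class_vec = classifier(image)       # content classification unit
        ops = analyzer(class_vec)           # data analyzer (ranked options)
        return share(ops[0])                # image sharing unit, top option
    return run
```

Feeding back the user's actual choice into the analyzer's training data, as the following paragraphs describe, would close the learning loop.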
从而在一个示例中,基于上述模块提供的功能,可通过手势截取屏幕图像,然后通过分类处理、分享操作的分析等处理,确定出分享操作信息,其中该分享操作信息可展示分享操作的列表供用户选择,如对于识别图像得到的分类信息天目山,确定程序包括即时通讯程序、旅游程序、地图程序等,然后可接收用户的选择指示确定启动旅游程序,在该旅游程序中自动搜索图像对应内容“天目山”的旅游产品信息呈现给用户。
并且,对于上述用户的选择等使用习惯信息可反馈给系统,基于该使用习惯信息自动学习用户的行为,用以训练分类器、数据分析器等,那么经过多次学习之后,下次用户再分享类似的内容,比如图像对应内容的分类信息识别为“黄山”或“风景区”等,则通过处理得到分享操作信息后,可自动启动旅游程序,搜索图像数据对应的内容“黄山”或“风景区”等,来获取相应地点的旅游产品信息呈现给用户,无需用户再次选择,满足用户的需求,体现“懂得”用户的智能与便捷。
需要说明的是,对于方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本申请实施例并不受所描述的动作顺序的限制,因为依据本申请实施例,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作并不一定是本申请实施例所必须的。
在上述实施例的基础上,本实施例还提供了一种数据处理装置,可以应用于终端设备、服务器等电子设备中。
参照图10,示出了本申请的一种数据处理装置实施例的结构框图,具体可以包括如下模块:
获取模块1002,用于获取图像数据。
识别模块1004,用于识别图像数据的分类信息,并依据所述分类信息确定对应的分享操作信息。
分享模块1006，用于依据所述分享操作信息调用对应的程序，采用所述程序发布所述图像数据。
综上,可获取需要分享的图像数据,然后确定该图像数据的分类信息,如确定图像中内容的分类,并依据该分类信息确定对应的分享操作信息,然后可依据分享操作信息发布所述图像数据,从而能够自动确定对图像数据的分享操作并执行分享,提高信息的处理效率。
参照图11,示出了本申请的另一种数据处理装置实施例的结构框图,具体可以包括如下模块:
获取模块1002,用于获取图像数据。
识别模块1004,用于识别图像数据的分类信息,并依据所述分类信息确定对应的分享操作信息。
分享模块1006,用于依据所述分享操作信息调用对应的程序,采用所述程序发布所述图像数据。
反馈模块1008,用于将所述分享操作信息添加到使用习惯信息中。
其中,所述识别模块1004,包括分类子模块10042和分享操作子模块10044,其中:
所述分类子模块10042,用于采用分类器识别图像数据中包含的内容对应的分类信息。
所述分享操作子模块10044,用于依据数据分析器对所述分类信息进行分析,确定所述图像数据的分享操作信息。
所述分类子模块10042,用于采用分类器对图像数据进行分类处理,确定所述图像数据中包含的内容的分类结果向量,将所述分类结果向量作为分类信息。
所述分享操作子模块10044，用于获取使用习惯信息，将所述使用习惯信息转换为使用习惯向量；将所述使用习惯向量和分类结果向量输入到数据分析器中进行分析，确定所述图像数据的分享操作信息。
所述分享操作信息包括:程序信息、操作信息。分享模块1006,包括:程序调用子模块10062和数据分享子模块10064,其中:
所述程序调用子模块10062,用于依据所述程序信息调用对应的程序,依据所述操作信息在所述程序中加载所述图像数据。
所述数据分享子模块10064,用于依据发布指示发布所述图像数据。
所述操作信息包括分享类型和分享内容。所述程序调用子模块10062，用于依据所述分享类型在所述程序中启动对应的页面；依据所述分享内容在所述页面中加载所述图像数据。
所述获取模块1002,用于依据指示信息截取屏幕图像,生成对应的图像数据。
所述获取模块1002，用于依据指示信息确定截取区域，截取所述截取区域对应的屏幕图像。所述指示信息依据以下至少一种操作触发：点击操作、手势操作、滑动操作。
本申请实施例中，可在操作系统中设置上述图像获取、分类、分享等功能，从而用户对终端设备操作过程中可依据需求随时分享各种信息，便捷地实现对信息的搜索、发布、定位查询等分享操作。因此可在操作系统中设置相应的功能接口API（Application Programming Interface，应用程序编程接口），从而在检测到手势操作后可生成指示信息，然后调用该功能接口，通过该功能接口实现对图像数据的分类信息、分享操作信息的确定并调用程序进行发布等。
本申请实施例中,在截取图像后可进行内容分类,如基于分类器识别图像中内容的分类信息,该分类器可基于CNN等模型训练得到,从而在终端设备上对图像进行智能分类。并且,用户还可对分类结果进行修正,例如图像分类为衣服,用户在购物程序中搜索时还可添加搜索信息如T恤等,从而修正信息还可上传到服务器中,作为训练数据便于后续调整分类器,提高分类的准确性。
本申请实施例还提供了一种非易失性可读存储介质,该存储介质中存储有一个或多个模块(programs),该一个或多个模块被应用在终端设备时,可以使得该终端设备执行本申请实施例中各方法步骤的指令(instructions)。
本申请实施例提供了一个或多个机器可读介质,其上存储有可执行代码,当所述可执行代码被执行时,使得处理器执行如本申请实施例中一个或多个所述的数据处理方法。其中,所述电子设备包括终端设备、服务器(集群)等设备。本申请实施例中,终端设备指的是具有终端操作系统的设备,这些设备可支持音频、视频、数据等方面的功能,包括移动终端如智能手机、平板电脑、可穿戴设备,也可以是智能电视、个人计算机等设备。操作系统如AliOS、IOS、Android、Windows等。
图12为本申请一实施例提供的电子设备的硬件结构示意图，该电子设备可包括终端设备、服务器（集群）等设备。如图12所示，该电子设备可以包括输入设备120、处理器121、输出设备122、存储器123和至少一个通信总线124。通信总线124用于实现元件之间的通信连接。存储器123可能包含高速RAM（Random Access Memory，随机存取存储器），也可能还包括非易失性存储NVM（Non-Volatile Memory），例如至少一个磁盘存储器，存储器123中可以存储各种程序，用于完成各种处理功能以及实现本实施例的方法步骤。
可选的,上述处理器121例如可以为中央处理器(Central Processing Unit,简称CPU)、应用专用集成电路(ASIC)、数字信号处理器(DSP)、数字信号处理设备(DSPD)、可编程逻辑器件(PLD)、现场可编程门阵列(FPGA)、控制器、微控制器、微处理器或其他电子元件实现,该处理器121通过有线或无线连接耦合到上述输入设备120和输出设备122。
可选的，上述输入设备120可以包括多种输入设备，例如可以包括面向用户的用户接口、面向设备的设备接口、软件的可编程接口、摄像头、传感器中至少一种。可选的，该面向设备的设备接口可以是用于设备与设备之间进行数据传输的有线接口、还可以是用于设备与设备之间进行数据传输的硬件插入接口（例如USB接口、串口等）；可选的，该面向用户的用户接口例如可以是面向用户的控制按键、用于接收语音输入的语音输入设备以及用于接收用户触摸输入的触摸感知设备（例如具有触摸感应功能的触摸屏、触控板等）；可选的，上述软件的可编程接口例如可以是供用户编辑或者修改程序的入口，例如芯片的输入引脚接口或者输入接口等；可选的，上述收发信机可以是具有通信功能的射频收发芯片、基带处理芯片以及收发天线等。麦克风等音频输入设备可以接收语音数据。输出设备122可以包括显示器、音响等输出设备。
在本实施例中,该设备的处理器包括用于执行各电子设备中网络管理装置各模块的功能,具体功能和技术效果参照上述实施例即可,此处不再赘述。
图13为本申请另一实施例提供的电子设备的硬件结构示意图。图13是对图12在实现过程中的一个具体的实施例。如图13所示,本实施例的电子设备包括处理器131以及存储器132。
处理器131执行存储器132所存放的计算机程序代码,实现上述实施例中图1至图9的数据处理方法。
存储器132被配置为存储各种类型的数据以支持在电子设备的操作。这些数据的示例包括用于在电子设备上操作的任何应用程序或方法的指令,例如消息,图片,视频等。存储器132可能包含随机存取存储器RAM,也可能还包括非易失性存储器NVM,例如至少一个磁盘存储器。
可选地，处理器131设置在处理组件130中。该电子设备还可以包括：通信组件133，电源组件134，多媒体组件135，音频组件136，输入/输出接口137和/或传感器组件138。设备具体所包含的组件等依据实际需求设定，本实施例对此不作限定。
处理组件130通常控制设备的整体操作。处理组件130可以包括一个或多个处理器131来执行指令,以完成上述图1至图9方法的全部或部分步骤。此外,处理组件130可以包括一个或多个模块,便于处理组件130和其他组件之间的交互。例如,处理组件130可以包括多媒体模块,以方便多媒体组件135和处理组件130之间的交互。
电源组件134为设备的各种组件提供电力。电源组件134可以包括电源管理系统,一个或多个电源,及其他与为电子设备生成、管理和分配电力相关联的组件。
多媒体组件135包括在设备和用户之间的提供一个输出接口的显示屏。在一些实施例中,显示屏可以包括液晶显示器(LCD)和触摸面板(TP)。如果显示屏包括触摸面板,显示屏可以被实现为触摸屏,以接收来自用户的输入信号。触摸面板包括一个或多个触摸传感器以感测触摸、滑动和触摸面板上的手势。所述触摸传感器可以不仅感测触摸或滑动动作的边界,而且还检测与所述触摸或滑动操作相关的持续时间和压力。
音频组件136被配置为输出和/或输入音频信号。例如,音频组件136包括一个麦克风(MIC),当设备处于操作模式,如语音识别模式时,麦克风被配置为接收外部音频信号。所接收的音频信号可以被进一步存储在存储器132或经由通信组件133发送。在一些实施例中,音频组件136还包括一个扬声器,用于输出音频信号。
输入/输出接口137为处理组件130和外围接口模块之间提供接口,上述外围接口模块可以是点击轮,按钮等。这些按钮可包括但不限于:音量按钮、启动按钮和锁定按钮。
传感器组件138包括一个或多个传感器,用于为设备提供各个方面的状态评估。例如,传感器组件138可以检测到设备的打开/关闭状态,组件的相对定位,用户与设备接触的存在或不存在。传感器组件138可以包括接近传感器,被配置用来在没有任何的物理接触时检测附近物体的存在,包括检测用户与设备间的距离。在一些实施例中,该传感器组件138还可以包括摄像头等。
通信组件133被配置为便于电子设备和其他电子设备之间有线或无线方式的通信。电子设备可以接入基于通信标准的无线网络,如WiFi,2G或3G,或它们的组合。在一个实施例中,该电子设备中可以包括SIM卡插槽,该SIM卡插槽用于插入SIM卡,使得设备可以登录GPRS网络,通过互联网与服务器建立通信。
由上可知,在图13实施例中所涉及的通信组件133、音频组件136以及输入/输出接口137、传感器组件138均可以作为图12实施例中的输入设备的实现方式。
本申请实施例提供了一种电子设备，包括：处理器；和存储器，其上存储有可执行代码，当所述可执行代码被执行时，使得所述处理器执行如本申请实施例中一个或多个所述的数据处理方法。
本申请实施例还提供一种用于电子设备的操作系统,如图14所示,该终端设备的操作系统包括:处理单元1402和分享单元1404。
处理单元1402,获取图像数据;识别图像数据的分类信息,并依据所述分类信息确定对应的分享操作信息。
分享单元1404,依据所述分享操作信息调用对应的程序,采用所述程序发布所述图像数据。
对于装置实施例而言,由于其与方法实施例基本相似,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
本说明书中的各个实施例均采用递进的方式描述,每个实施例重点说明的都是与其他实施例的不同之处,各个实施例之间相同相似的部分互相参见即可。
本领域内的技术人员应明白,本申请实施例的实施例可提供为方法、装置、或计算机程序产品。因此,本申请实施例可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本申请实施例可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本申请实施例是参照根据本申请实施例的方法、终端设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理终端设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理终端设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理终端设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理终端设备上，使得在计算机或其他可编程终端设备上执行一系列操作步骤以产生计算机实现的处理，从而在计算机或其他可编程终端设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
尽管已描述了本申请实施例的优选实施例,但本领域内的技术人员一旦得知了基本创造性概念,则可对这些实施例做出另外的变更和修改。所以,所附权利要求意欲解释为包括优选实施例以及落入本申请实施例范围的所有变更和修改。
最后,还需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者终端设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者终端设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者终端设备中还存在另外的相同要素。
以上对本申请所提供的一种数据处理方法,一种数据处理装置,一种电子设备以及一种机器可读介质,进行了详细介绍,本文中应用了具体个例对本申请的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本申请的方法及其核心思想;同时,对于本领域的一般技术人员,依据本申请的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本申请的限制。

Claims (16)

  1. A data processing method, comprising:
    acquiring image data;
    identifying classification information of the image data, and determining corresponding sharing operation information according to the classification information; and
    calling a corresponding program according to the sharing operation information, and publishing the image data with the program.
  2. The method according to claim 1, wherein the identifying classification information of the image data comprises:
    using a classifier to identify classification information corresponding to content contained in the image data.
  3. The method according to claim 2, wherein the using a classifier to identify classification information corresponding to content contained in the image data comprises:
    performing classification processing on the image data with the classifier, determining a classification result vector of the content contained in the image data, and taking the classification result vector as the classification information.
  4. The method according to claim 3, wherein the determining corresponding sharing operation information according to the classification information comprises:
    analyzing the classification information with a data analyzer to determine the sharing operation information of the image data.
  5. The method according to claim 4, wherein the analyzing the classification information with a data analyzer to determine the sharing operation information of the image data comprises:
    acquiring usage habit information, and converting the usage habit information into a usage habit vector; and
    inputting the usage habit vector and the classification result vector into the data analyzer for analysis to determine the sharing operation information of the image data.
  6. The method according to claim 1, wherein the sharing operation information comprises program information and operation information.
  7. The method according to claim 6, wherein the calling a corresponding program according to the sharing operation information and publishing the image data with the program comprises:
    calling the corresponding program according to the program information, and loading the image data in the program according to the operation information; and
    publishing the image data according to a publish instruction.
  8. The method according to claim 7, wherein the operation information comprises a share type and share content, and the loading the image data in the program according to the operation information comprises:
    opening a corresponding page in the program according to the share type; and
    loading the image data in the page according to the share content.
  9. The method according to claim 1, wherein the acquiring image data comprises:
    capturing a screen image according to instruction information to generate corresponding image data.
  10. The method according to claim 9, wherein the capturing a screen image according to instruction information comprises:
    determining a capture region according to the instruction information, and capturing the screen image corresponding to the capture region.
  11. The method according to claim 1, further comprising, after the publishing the image data:
    adding the sharing operation information to usage habit information.
  12. The method according to claim 9, wherein the instruction information is triggered by at least one of the following operations: a click operation, a gesture operation, or a slide operation.
  13. A data processing apparatus, comprising:
    an acquisition module configured to acquire image data;
    an identification module configured to identify classification information of the image data, and determine corresponding sharing operation information according to the classification information; and
    a sharing module configured to call a corresponding program according to the sharing operation information, and publish the image data with the program.
  14. An electronic device, comprising:
    a processor; and
    a memory storing executable code that, when executed, causes the processor to perform the data processing method according to one or more of claims 1-12.
  15. One or more machine-readable media storing executable code that, when executed, causes a processor to perform the data processing method according to one or more of claims 1-12.
  16. An operating system for an electronic device, comprising:
    a processing unit that acquires image data, identifies classification information of the image data, and determines corresponding sharing operation information according to the classification information; and
    a sharing unit that calls a corresponding program according to the sharing operation information and publishes the image data with the program.
PCT/CN2019/089772 2018-06-07 2019-06-03 一种数据处理方法、装置、电子设备和可读介质 WO2019233365A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/108,996 US20210150243A1 (en) 2018-06-07 2020-12-01 Efficient image sharing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810581577.4 2018-06-07
CN201810581577.4A CN110580486B (zh) 2018-06-07 2018-06-07 一种数据处理方法、装置、电子设备和可读介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/108,996 Continuation-In-Part US20210150243A1 (en) 2018-06-07 2020-12-01 Efficient image sharing

Publications (1)

Publication Number Publication Date
WO2019233365A1 true WO2019233365A1 (zh) 2019-12-12

Family

ID=68769717

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/089772 WO2019233365A1 (zh) 2018-06-07 2019-06-03 一种数据处理方法、装置、电子设备和可读介质

Country Status (4)

Country Link
US (1) US20210150243A1 (zh)
CN (1) CN110580486B (zh)
TW (1) TW202001685A (zh)
WO (1) WO2019233365A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113194323B (zh) * 2021-04-27 2023-11-10 口碑(上海)信息技术有限公司 信息交互方法、多媒体信息互动方法以及装置
CN113971136B (zh) * 2021-12-03 2022-09-09 杭银消费金融股份有限公司 基于图像识别的页面测试方法及系统

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105653152A (zh) * 2015-12-23 2016-06-08 北京金山安全软件有限公司 一种图片处理方法、装置及电子设备
CN107301204A (zh) * 2017-05-27 2017-10-27 深圳市金立通信设备有限公司 一种分享文件的方法及终端
CN107465949A (zh) * 2017-07-13 2017-12-12 彭茂笑 一种在智能终端上保持多媒体信息实时显示的分享方法
CN107590006A (zh) * 2017-09-05 2018-01-16 广东欧珀移动通信有限公司 文件处理方法、装置及移动终端

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001117995A (ja) * 1999-10-21 2001-04-27 Vision Arts Kk 電子商取引システム及び電子商取引方法、識別情報付加装置及び識別情報付加プログラムを記録したコンピュータ読み取り可能な記録媒体、取引情報提供装置及び取引情報提供プログラムを記録したコンピュータ読み取り可能な記録媒体、決済情報提供装置及び決済情報提供プログラムを記録したコンピュータ読み取り可能な記録媒体、決済処理装置及び決済処理プログラムを記録したコンピュータ読み取り可能な記録媒体、電子商取引端末及び電子商取引プログラムを記録したコンピュータ読み取り可能な記録媒体
CN103338405A (zh) * 2013-06-03 2013-10-02 四川长虹电器股份有限公司 一种截屏应用的方法、设备及系统
CN104657423B (zh) * 2015-01-16 2018-07-06 白天 应用间内容分享方法及其装置
CN108076280A (zh) * 2016-11-11 2018-05-25 北京佳艺徕经贸有限责任公司 一种基于图像识别的影像分享方法及装置
CN107450796B (zh) * 2017-06-30 2019-10-08 努比亚技术有限公司 一种图片处理方法、移动终端和计算机可读存储介质
CN108108102B (zh) * 2018-01-02 2024-01-23 联想(北京)有限公司 图像推荐方法及电子设备

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105653152A (zh) * 2015-12-23 2016-06-08 北京金山安全软件有限公司 一种图片处理方法、装置及电子设备
CN107301204A (zh) * 2017-05-27 2017-10-27 深圳市金立通信设备有限公司 一种分享文件的方法及终端
CN107465949A (zh) * 2017-07-13 2017-12-12 彭茂笑 一种在智能终端上保持多媒体信息实时显示的分享方法
CN107590006A (zh) * 2017-09-05 2018-01-16 广东欧珀移动通信有限公司 文件处理方法、装置及移动终端

Also Published As

Publication number Publication date
TW202001685A (zh) 2020-01-01
CN110580486A (zh) 2019-12-17
US20210150243A1 (en) 2021-05-20
CN110580486B (zh) 2024-04-12


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19814713

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19814713

Country of ref document: EP

Kind code of ref document: A1