CN111796701A - Model training method, operation processing method, device, storage medium and equipment - Google Patents

Model training method, operation processing method, device, storage medium and equipment

Info

Publication number
CN111796701A
Authority
CN
China
Prior art keywords
information
historical
instruction
data
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201910282473.8A
Other languages
Chinese (zh)
Inventor
陈仲铭
何明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910282473.8A
Publication of CN111796701A
Legal status: Withdrawn (current)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0416Control or interface arrangements specially adapted for digitisers
    • G06F3/0418Control or interface arrangements specially adapted for digitisers for error correction or compensation, e.g. based on parallax, calibration or alignment

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a model training method, an operation processing method, an apparatus, a storage medium, and a device. The model training method comprises the following steps: acquiring a plurality of pieces of data to be trained, where each piece of data to be trained includes historical manipulation information and first scene information corresponding to the historical manipulation information, the historical manipulation information is information corresponding to a historical manipulation instruction received by an electronic device, the historical manipulation instruction is an erroneous manipulation instruction or a non-erroneous manipulation instruction, and the first scene information is scene information corresponding to the scenario in which the electronic device is located when it receives the historical manipulation instruction; and performing model training on the plurality of pieces of data to be trained to obtain an instruction prediction model for predicting whether a manipulation instruction is an erroneous manipulation instruction. The method and the device can improve the accuracy of judging the current operation of the user.

Description

Model training method, operation processing method, device, storage medium and equipment
Technical Field
The present application relates to the field of electronic technologies, and in particular, to a model training method, an operation processing method, an apparatus, a storage medium, and a device.
Background
At present, electronic device manufacturers have successively introduced devices with full screens, curved screens, notch screens, and the like, all of which feature a high screen-to-body ratio. Because the display screen occupies such a large proportion of the device, the user can easily touch it by mistake, so that the electronic device receives false-touch instructions and mistakenly executes the corresponding operations. The electronic device therefore needs to perform a false-touch judgment on each received touch instruction to determine whether it is a false-touch instruction.
Disclosure of Invention
The embodiments of the application provide a model training method, an operation processing method, an apparatus, a storage medium, and a device, which can improve the accuracy of judging the current operation of a user.
In a first aspect, an embodiment of the present application provides a model training method, including:
acquiring a plurality of pieces of data to be trained, where the data to be trained include historical manipulation information and first scene information corresponding to the historical manipulation information, the historical manipulation information is information corresponding to a historical manipulation instruction received by an electronic device, the historical manipulation instruction is an erroneous manipulation instruction or a non-erroneous manipulation instruction, and the first scene information is scene information corresponding to the scenario in which the electronic device is located when it receives the historical manipulation instruction; and
performing model training on the plurality of pieces of data to be trained to obtain an instruction prediction model for predicting whether a manipulation instruction is an erroneous manipulation instruction.
In a second aspect, an embodiment of the present application provides an operation processing method, including:
acquiring data to be identified, where the data to be identified include current manipulation information and second scene information corresponding to the current manipulation information, the current manipulation information is information corresponding to a current manipulation instruction received by the electronic device, and the second scene information is scene information corresponding to the scenario in which the electronic device is located when it receives the current manipulation instruction;
inputting the data to be identified into a pre-trained instruction prediction model, and predicting whether the current manipulation instruction received by the electronic device is an erroneous manipulation instruction;
where the instruction prediction model is obtained by performing model training on a plurality of pieces of data to be trained.
In a third aspect, an embodiment of the present application provides a model training apparatus, including:
an acquisition module, configured to acquire a plurality of pieces of data to be trained, where the data to be trained include historical manipulation information and first scene information corresponding to the historical manipulation information, the historical manipulation information is information corresponding to a historical manipulation instruction received by an electronic device, the historical manipulation instruction is an erroneous manipulation instruction or a non-erroneous manipulation instruction, and the first scene information is scene information corresponding to the scenario in which the electronic device is located when it receives the historical manipulation instruction; and
a training module, configured to perform model training on the plurality of pieces of data to be trained to obtain an instruction prediction model for predicting whether a manipulation instruction is an erroneous manipulation instruction.
In a fourth aspect, an embodiment of the present application provides an operation processing apparatus, including:
an acquisition module, configured to acquire data to be identified, where the data to be identified include current manipulation information and second scene information corresponding to the current manipulation information, the current manipulation information is information corresponding to a current manipulation instruction received by the electronic device, and the second scene information is scene information corresponding to the scenario in which the electronic device is located when it receives the current manipulation instruction; and
a prediction module, configured to input the data to be identified into a pre-trained instruction prediction model and predict whether the current manipulation instruction received by the electronic device is an erroneous manipulation instruction;
where the instruction prediction model is obtained by performing model training on a plurality of pieces of data to be trained.
In a fifth aspect, an embodiment of the present application provides a storage medium on which a computer program is stored, where the computer program, when executed on a computer, causes the computer to execute the model training method or the operation processing method provided in the embodiments of the present application.
In a sixth aspect, an embodiment of the present application further provides an electronic device, which includes a memory and a processor, where the processor is configured to execute the model training method provided in the embodiment of the present application, or is configured to execute the operation processing method provided in the embodiment of the present application, by calling a computer program stored in the memory.
In the model training method provided by the embodiments of the application, each piece of data to be trained combines the information corresponding to a historical manipulation instruction received by the electronic device with the scene information of the scenario in which the device was located when that instruction was received. An instruction prediction model trained on a plurality of such pieces of data can therefore judge more accurately whether a manipulation instruction received by the electronic device is erroneous, that is, whether the user's operation is a valid operation. In the operation processing method provided by the embodiments of the application, this instruction prediction model is used to predict whether the instruction corresponding to the user's current operation is an erroneous manipulation instruction, which improves the accuracy of judging whether the current operation is a mis-operation.
Drawings
The technical solutions and advantages of the present application will become apparent from the following detailed description of specific embodiments of the present application when taken in conjunction with the accompanying drawings.
Fig. 1 is a schematic structural diagram of a panoramic sensing architecture provided in an embodiment of the present application.
Fig. 2 is a first schematic flowchart of a model training method provided in an embodiment of the present application.
Fig. 3 is a second schematic flowchart of a model training method provided in an embodiment of the present application.
Fig. 4 is a third schematic flowchart of a model training method provided in an embodiment of the present application.
Fig. 5 is a fourth schematic flowchart of a model training method provided in an embodiment of the present application.
Fig. 6 is a schematic flowchart of an operation processing method provided in an embodiment of the present application.
Fig. 7 is a schematic structural diagram of a model training apparatus according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of an operation processing apparatus according to an embodiment of the present application.
Fig. 9 is a schematic structural diagram of a first electronic device according to an embodiment of the present application.
Fig. 10 is a schematic structural diagram of a second electronic device according to an embodiment of the present application.
Detailed Description
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present application are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the application and should not be taken as limiting the application with respect to other embodiments that are not detailed herein.
In the related art, false-touch judgment is generally performed in one of two ways. The first relies on electronic-circuit detection: based on the driving signal from the display screen, it is determined whether the touch at the screen edge, or the user's operation pattern, exceeds a certain threshold; if so, the touch is judged to be a false touch, otherwise it is treated as a normal touch. The second judges the touch instruction in software: the touch instruction is evaluated using registers such as an accumulator, and a threshold is set to decide whether the false-touch condition is met. Neither approach takes into account the current environment and state of the electronic device, so the accuracy of their false-touch judgments is low.
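A minimal sketch of the threshold-style check described above; the edge margin, area threshold, and function names are illustrative assumptions rather than any real driver's logic, and the sketch makes clear that such a rule ignores the device's current environment and state.

```python
# Hypothetical sketch of a fixed-threshold false-touch check: a touch landing in a
# narrow edge band whose contact area exceeds a threshold is rejected. All names
# and numbers are illustrative assumptions.
EDGE_MARGIN_PX = 40      # assumed width of the edge band, in pixels
MAX_EDGE_AREA = 120.0    # assumed contact-area threshold inside the band

def is_false_touch(x: float, y: float, area: float, screen_w: int, screen_h: int) -> bool:
    """Return True if the touch looks like an accidental edge touch."""
    near_edge = (x < EDGE_MARGIN_PX or x > screen_w - EDGE_MARGIN_PX or
                 y < EDGE_MARGIN_PX or y > screen_h - EDGE_MARGIN_PX)
    return near_edge and area > MAX_EDGE_AREA
```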
As sensors become smaller and smarter, electronic devices such as mobile phones and tablet computers integrate more and more of them, such as light sensors, distance sensors, position sensors, acceleration sensors, and gravity sensors. Through these sensors the electronic device can acquire more data with less power consumption. Meanwhile, the electronic device can acquire data related to its own state and data related to the state of the user during operation. In general, an electronic device can obtain data related to the external environment, data related to the user state, and data related to the state of the device itself.
In the embodiment of the application, in order to process the data acquired by the electronic device, a panoramic sensing architecture is provided. Fig. 1 is a schematic structural diagram of a panoramic sensing architecture provided in an embodiment of the present application, and is applied to an electronic device, and the panoramic sensing architecture includes, from bottom to top, an information sensing layer, a data processing layer, a feature extraction layer, a scenario modeling layer, and an intelligent service layer.
The information perception layer is used for acquiring information of the electronic equipment or information in an external environment. The information-perceiving layer may include a plurality of sensors. For example, the information sensing layer includes a plurality of sensors such as a distance sensor, a magnetic field sensor, a light sensor, an acceleration sensor, a fingerprint sensor, a hall sensor, a position sensor, a gyroscope, an inertial sensor, an attitude sensor, a barometer, and a heart rate sensor.
Among other things, a distance sensor may be used to detect a distance between the electronic device and an external object. The magnetic field sensor may be used to detect magnetic field information of the environment in which the electronic device is located. The light sensor can be used for detecting light information of the environment where the electronic equipment is located. The acceleration sensor may be used to detect acceleration data of the electronic device. The fingerprint sensor may be used to collect fingerprint information of a user. The Hall sensor is a magnetic field sensor manufactured according to the Hall effect, and can be used for realizing automatic control of electronic equipment. The location sensor may be used to detect the geographic location where the electronic device is currently located. Gyroscopes may be used to detect angular velocity of an electronic device in various directions. Inertial sensors may be used to detect motion data of an electronic device. The gesture sensor may be used to sense gesture information of the electronic device. A barometer may be used to detect the barometric pressure of the environment in which the electronic device is located. The heart rate sensor may be used to detect heart rate information of the user.
And the data processing layer is used for processing the data acquired by the information perception layer. For example, the data processing layer may perform data cleaning, data integration, data transformation, data reduction, and the like on the data acquired by the information sensing layer.
The data cleaning refers to cleaning a large amount of data acquired by the information sensing layer to remove invalid data and repeated data. The data integration refers to integrating a plurality of single-dimensional data acquired by the information perception layer into a higher or more abstract dimension so as to comprehensively process the data of the plurality of single dimensions. The data transformation refers to performing data type conversion or format conversion on the data acquired by the information sensing layer so that the transformed data can meet the processing requirement. The data reduction means that the data volume is reduced to the maximum extent on the premise of keeping the original appearance of the data as much as possible.
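As a small illustration of the data-cleaning step, the sketch below assumes the raw records collected by the information perception layer are held in a pandas DataFrame; the column layout and function name are assumptions for illustration, not part of the architecture.

```python
# Minimal data-cleaning sketch for the data processing layer: drop invalid
# (incomplete) records and repeated records.
import pandas as pd

def clean_raw_records(raw: pd.DataFrame) -> pd.DataFrame:
    cleaned = raw.dropna()               # remove invalid (incomplete) records
    cleaned = cleaned.drop_duplicates()  # remove repeated records
    return cleaned
```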
The characteristic extraction layer is used for extracting characteristics of the data processed by the data processing layer so as to extract the characteristics included in the data. The extracted features may reflect the state of the electronic device itself or the state of the user or the environmental state of the environment in which the electronic device is located, etc.
The feature extraction layer may extract features, or process the extracted features, using methods such as filter methods, wrapper methods, or ensemble (integration) methods.
The filter method filters the extracted features to remove redundant feature data. The wrapper method screens the extracted features. The ensemble method combines multiple feature extraction methods to construct a more efficient and more accurate method for extracting features.
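One possible, hedged realization of the filter step is univariate feature selection; the sketch below uses scikit-learn's SelectKBest as a stand-in and is not the filtering method of the application itself.

```python
# Illustrative filter-style feature selection: keep the k feature columns with the
# highest ANOVA F-scores and discard the rest as redundant.
from sklearn.feature_selection import SelectKBest, f_classif

def filter_features(X, y, k=10):
    """X: (n_samples, n_features) array, y: labels; returns the k selected columns."""
    selector = SelectKBest(score_func=f_classif, k=k)
    return selector.fit_transform(X, y)
```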
The scenario modeling layer is used for building a model according to the features extracted by the feature extraction layer, and the resulting model can be used to represent the state of the electronic device, the state of the user, the environmental state, and the like. For example, the scenario modeling layer may construct a key-value model, a pattern recognition model, a graph model, an entity-relationship model, an object-oriented model, and the like according to the features extracted by the feature extraction layer.
The intelligent service layer is used for providing intelligent services for the user according to the model constructed by the scene modeling layer. For example, the intelligent service layer can provide basic application services for users, perform system intelligent optimization for electronic equipment, and provide personalized intelligent services for users.
In addition, the panoramic perception architecture can further comprise a plurality of algorithms, each of which can be used to analyze and process data, and together the algorithms can form an algorithm library. For example, the algorithm library may include algorithms such as Markov algorithms, latent Dirichlet allocation, Bayesian classification, support vector machines, K-means clustering, K-nearest neighbors, conditional random fields, residual networks, long short-term memory networks, convolutional neural networks, and recurrent neural networks.
The embodiments of the application provide a model training method and an operation processing method, which can be applied to an electronic device. The electronic device may be a smartphone, a tablet computer, a gaming device, an AR (Augmented Reality) device, an automobile, a data storage device, an audio playback device, a video playback device, a laptop computer, a desktop computing device, or a wearable device such as an electronic watch, electronic glasses, an electronic helmet, an electronic bracelet, an electronic necklace, or an electronic garment.
Referring to fig. 2, fig. 2 is a first flowchart illustrating a model training method according to an embodiment of the present disclosure. The flow of the model training method can comprise the following steps:
In 101, a plurality of pieces of data to be trained are obtained, where the data to be trained include historical manipulation information and first scene information corresponding to the historical manipulation information, the historical manipulation information is information corresponding to a historical manipulation instruction received by the electronic device, the historical manipulation instruction is an erroneous manipulation instruction or a non-erroneous manipulation instruction, and the first scene information is scene information corresponding to the scenario in which the electronic device is located when it receives the historical manipulation instruction.
In the process of using the electronic device, each time the user performs a single click, multiple clicks, a gesture operation, a hovering touch, or another operation on the display screen, the electronic device receives a historical manipulation instruction.
For example, the electronic device may obtain a position of the historical manipulation instruction relative to the display screen, and determine the historical manipulation information according to the position of the historical manipulation instruction relative to the display screen. Or, assuming that the historical manipulation instruction is an instruction corresponding to a gesture operation, the electronic device may obtain a position, a gesture size, and a gesture direction of the historical manipulation instruction relative to the display screen, and determine the historical manipulation information according to the position, the gesture size, and the gesture direction of the historical manipulation instruction relative to the display screen. Or the electronic device may acquire the coordinates of the historical manipulation instruction relative to the display screen and the manner of manipulating the display screen, and determine the historical manipulation information according to the coordinates of the historical manipulation instruction relative to the display screen and the manner of manipulating the display screen.
In this embodiment, the electronic device may obtain the scene information of the scenario in which it is located when the historical manipulation instruction is received, and determine that scene information as the first scene information. The scenario in which the electronic device is located includes both the scene in which the device is located and the state of the device.
For example, the scene in which the electronic device is located may be a shopping mall, an office, a conference room, a game hall, a video hall, an entertainment venue, a music venue, or the like. The state of the electronic device may be: in a pocket, in the user's hand, on a table, being carried while running or walking, or the like. For example, when the electronic device receives a historical manipulation instruction, it determines that the scene in which it is located is a shopping mall and that its state is being in a pocket. The electronic device can then obtain the scene information corresponding to this scenario (i.e., shopping mall, pocket).
Therefore, the electronic device can determine the historical control information and the corresponding scene information according to the historical control instruction once the historical control instruction is received, and determine the data to be trained according to the historical control information and the corresponding scene information.
In some embodiments, the electronic device may determine 10 pieces of historical manipulation information and corresponding scenario information thereof according to the received 10 times of historical manipulation instructions, and determine data to be trained according to the 10 pieces of historical manipulation information and corresponding scenario information thereof. The 10 historical manipulation instructions may be faulty manipulation instructions or non-faulty manipulation instructions. Alternatively, the 10 historical manipulation instructions may include only non-faulty manipulation instructions. The misoperation instruction refers to an instruction corresponding to misoperation of the display screen. For example, the electronic device receives a wrong manipulation instruction when the user accidentally touches the display screen with his hand. The non-misoperation instruction refers to an instruction corresponding to correct operation of the display screen. For example, if a user wants to enter a home page of an application, the user may click an interface corresponding to the application, and the electronic device receives a non-error manipulation instruction.
At 102, model training is performed according to a plurality of data to be trained, and an instruction prediction model is obtained for predicting whether the control instruction is a wrong control instruction.
For example, after the electronic device obtains a plurality of data to be trained, the plurality of data to be trained may be input into a preset model for training to obtain an instruction prediction model for predicting whether the control instruction is a wrong control instruction.
For example, the electronic device may train a plurality of data to be trained by using an SVM classification algorithm model to obtain a trained SVM classification algorithm model, where the trained SVM classification algorithm model is an instruction prediction model, and an output of the instruction prediction model includes two types, i.e., 0 and 1, where 1 represents a non-erroneous operation instruction and 0 represents an erroneous operation instruction.
It should be noted that the above SVM classification algorithm model is only an example and is not intended to limit the present application. In this embodiment, the classification algorithm to be adopted can be set by a person skilled in the art according to actual needs; besides the SVM classification algorithm model described above, the classification algorithm may include, but is not limited to, a naive Bayes classification algorithm, a KNN algorithm, a neural network algorithm, and the like.
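As a concrete sketch of the training step, the following assumes each piece of data to be trained has already been flattened into a fixed-length numeric feature vector (manipulation information plus normalized scene information) and uses scikit-learn's SVC as the SVM classification model; label 1 marks a non-erroneous manipulation instruction and 0 an erroneous one. The flattening and the toy vectors are assumptions for illustration.

```python
# Training sketch for the instruction prediction model using an SVM classifier.
import numpy as np
from sklearn.svm import SVC

def train_instruction_prediction_model(X: np.ndarray, y: np.ndarray) -> SVC:
    """X: (n_samples, n_features) training vectors; y: labels, 1 = non-erroneous, 0 = erroneous."""
    model = SVC(kernel="rbf")
    model.fit(X, y)
    return model

# Usage sketch with fabricated toy vectors of the form [x, y, z, w, k, scene, state]:
# X = np.array([[0.1, -0.3, 0.0, 0.0, 0.0, 0.2, 0.4],
#               [0.9,  0.8, 0.0, 0.0, 0.0, 0.4, 0.8]])
# y = np.array([1, 0])
# model = train_instruction_prediction_model(X, y)
```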
Referring to fig. 3, fig. 3 is a second flowchart illustrating a model training method according to an embodiment of the present disclosure. The model training method can comprise the following steps:
in 201, the electronic device acquires a plurality of historical manipulation information and corresponding first scene information according to a time sequence.
For example, the electronic device may obtain a plurality of historical manipulation information and corresponding first scenario information thereof in a time sequence.
For example, at time t1 the user clicks the display screen of the electronic device once, and the electronic device receives a historical manipulation instruction a1. The electronic device can determine historical manipulation information b1 according to the historical manipulation instruction a1, together with first scene information c1 corresponding to b1. The electronic device can also mark the historical manipulation information to indicate whether it corresponds to an erroneous manipulation instruction received by the electronic device. For example, when the historical manipulation information b1 corresponds to an erroneous manipulation instruction received by the electronic device, the electronic device can give b1 a mis-operation mark; when b1 corresponds to a non-erroneous manipulation instruction, the electronic device can give it a non-mis-operation mark.
At time t2, the user clicks the display screen of the electronic device multiple times, and the electronic device receives a historical manipulation instruction a2. The electronic device can determine historical manipulation information b2 according to a2, together with first scene information c2 corresponding to b2.
At time t3, the user performs a gesture operation on the display screen of the electronic device, and the electronic device receives a historical manipulation instruction a3. The electronic device can determine historical manipulation information b3 according to a3, together with first scene information c3 corresponding to b3.
In 202, when a piece of acquired historical manipulation information, referred to as the first historical manipulation information, is information corresponding to an erroneous manipulation instruction received by the electronic device, the electronic device determines first data to be trained according to the first historical manipulation information and third scene information corresponding to the first historical manipulation information, together with the historical manipulation information preceding the first historical manipulation information and the first scene information corresponding to that preceding historical manipulation information.
Wherein the first historical manipulation information is included within a plurality of historical manipulation information.
For example, after the electronic device acquires the plurality of pieces of historical manipulation information and their corresponding scene information, it may check the pieces of historical manipulation information in turn for the mis-operation mark. If the electronic device detects that a certain piece of historical manipulation information carries the mis-operation mark, it determines that this piece corresponds to an erroneous manipulation instruction received by the electronic device. The electronic device may then determine the first data to be trained according to this piece of historical manipulation information and the third scene information corresponding to it, together with the historical manipulation information preceding it and the first scene information corresponding to that preceding information.
For example, assume that the electronic device acquires 100 pieces of historical manipulation information and their corresponding scene information. Subsequently, the electronic device may sequentially perform the mis-manipulation flag detection on the 100 pieces of historical manipulation information. If the electronic device detects that the 10 th historical manipulation information has the wrong manipulation mark, the electronic device may determine the first data to be trained according to the 10 th historical manipulation information and the corresponding scenario information thereof, and a plurality of historical manipulation information before the 10 th historical manipulation information and the corresponding scenario information thereof, that is, the previous 9 historical manipulation information and the corresponding scenario information thereof.
The electronic device may then continue the mis-operation mark detection on the remaining 90 pieces of historical manipulation information. If the electronic device detects that the 15th piece carries the mis-operation mark, it may determine first data to be trained according to the 15th piece and its corresponding scene information, together with the 11th through 14th pieces and their corresponding scene information. By analogy, whenever the electronic device detects that a piece of historical manipulation information carries the mis-operation mark, it determines first data to be trained according to that piece and its corresponding scene information, together with the historical manipulation information preceding it and the corresponding scene information.
In 203, when the acquired preset number of pieces of historical manipulation information is information corresponding to the non-erroneous manipulation instruction received by the electronic device, the electronic device determines second data to be trained according to the preset number of pieces of historical manipulation information and the corresponding first scene information.
For example, when the electronic device performs the mis-operation mark detection on the plurality of pieces of historical manipulation information, the following case may arise: several consecutive pieces of historical manipulation information carry no mis-operation mark, that is, they all correspond to non-erroneous manipulation instructions received by the electronic device. In that case the electronic device may use a quantity threshold. The quantity threshold may be 40, 50, 35, or the like; it is not limited here and can be set according to actual requirements.
For example, assume the quantity threshold is 40. When the electronic device detects that no mis-operation mark exists in 40 consecutive pieces of historical manipulation information, that is, these 40 pieces all correspond to non-erroneous manipulation instructions received by the electronic device, the second data to be trained can be determined according to the 40 pieces of historical manipulation information and their corresponding scene information.
For example, assume that there are 200 pieces of history manipulation information. The electronic device detects that no misoperation marker exists in the first 40 pieces of historical manipulation information, and then the second data to be trained can be determined according to the first 40 pieces of historical manipulation information and the corresponding scene information. If the electronic device detects that no mishandling mark exists from the 71 th historical handling information to the 110 th historical handling information, the electronic device may determine the second data to be trained according to the 71 th historical handling information and the corresponding scene information thereof to the 110 th historical handling information and the corresponding scene information thereof.
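The two grouping rules described above can be sketched as follows, assuming the records arrive in chronological order as (manipulation_info, scene_info, is_mis_op) tuples; the record format and function name are illustrative assumptions rather than the application's own code.

```python
# Grouping sketch: a mis-operation record closes a "first" training sample made of
# all records collected since the previous sample; a run of `threshold` consecutive
# non-mis-operation records closes a "second" training sample.
def build_training_samples(records, threshold=40):
    first_samples, second_samples, buffer = [], [], []
    for manipulation_info, scene_info, is_mis_op in records:
        buffer.append((manipulation_info, scene_info))
        if is_mis_op:
            first_samples.append(buffer)   # sample ends with a mis-operation record
            buffer = []
        elif len(buffer) == threshold:
            second_samples.append(buffer)  # threshold consecutive normal records
            buffer = []
    return first_samples, second_samples
```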
At 204, the electronic device determines a plurality of data to be trained according to the first data to be trained and the second data to be trained.
For example, after the electronic device obtains a plurality of first data to be trained and a plurality of second data to be trained, the plurality of data to be trained may be determined according to the plurality of first data to be trained and the plurality of second data to be trained.
For example, when the electronic device acquires 10 pieces of first data to be trained and 10 pieces of second data to be trained, the electronic device acquires 20 pieces of data to be trained.
In 205, the electronic device performs model training according to a plurality of data to be trained, and obtains an instruction prediction model for predicting whether the manipulation instruction is a wrong manipulation instruction.
For example, after the electronic device obtains a plurality of data to be trained, the plurality of data to be trained may be input into a preset model for training to obtain an instruction prediction model for predicting whether the control instruction is a wrong control instruction.
For example, the electronic device may train a plurality of data to be trained by using an SVM classification algorithm model to obtain a trained SVM classification algorithm model, where the trained SVM classification algorithm model is an instruction prediction model, and an output of the instruction prediction model includes two types, i.e., 0 and 1, where 1 represents a non-erroneous operation instruction and 0 represents an erroneous operation instruction.
Referring to fig. 4, in some embodiments, the process 201 may include:
2011. the electronic equipment receives a plurality of historical manipulation instructions in a time sequence.
For example, when the user clicks the display screen once, the electronic device receives a historical manipulation instruction. When the user clicks the display screen twice, the electronic device receives a historical manipulation instruction. When the user performs a gesture operation on the display screen, the electronic device receives a historical manipulation instruction. When the user performs a hovering touch on the display screen, the electronic device receives a historical manipulation instruction.
In this embodiment, the electronic device receives a plurality of history manipulation instructions in chronological order. And a plurality of historical manipulation instructions may be stored in chronological order.
2012. The electronic equipment acquires the position information corresponding to each historical manipulation instruction.
For example, each time the electronic device receives a history manipulation instruction, the electronic device may obtain the location information corresponding to the history manipulation instruction.
For example, when the user clicks the display screen once, the electronic device receives the history manipulation instruction. The position information corresponding to the history control instruction can be the position of the history control instruction relative to the display screen, that is, the coordinate of the position of the user clicking the display screen relative to the whole display screen. For example, assuming that a planar rectangular coordinate system is established with the center of the display screen as an origin, the position information corresponding to the historical manipulation instruction may be a coordinate of the position of the historical manipulation instruction in the planar rectangular coordinate system.
For example, when the user performs a gesture operation on the display screen, the electronic device receives a historical manipulation instruction. The position information corresponding to the historical manipulation instruction may be the position of the historical manipulation instruction relative to the display screen, that is, the coordinates, relative to the entire display screen, of the position where the user performs the gesture operation. For example, assuming that a planar rectangular coordinate system is established with the center of the display screen as the origin, the position information corresponding to the historical manipulation instruction may be the coordinates of the position where the user performs the gesture operation in that coordinate system; the position information may further include a gesture size and a gesture direction.
2013. And the electronic equipment determines each historical control information according to the position information corresponding to each historical control instruction.
After the electronic device obtains the position information corresponding to each historical manipulation instruction, each historical manipulation information can be determined according to the position information corresponding to each historical manipulation instruction. For example, coordinates of the position of the historical manipulation instruction in a pre-established planar rectangular coordinate system are determined as the historical manipulation information. Coordinates of the position of the historical manipulation instruction in a pre-established planar rectangular coordinate system, and the size and the direction of the manipulation instruction can also be determined as historical manipulation information.
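As a small illustration of the coordinate convention above, the sketch below maps a raw touch position given in screen pixels (origin at a corner, as touch events are commonly reported) into the display-centred planar rectangular coordinate system; the raw event format, axis orientation, and screen dimensions are assumptions.

```python
# Convert a raw pixel position into coordinates whose origin is the display centre,
# matching the planar rectangular coordinate system described above.
def to_centered_coordinates(px: float, py: float, screen_w: int, screen_h: int):
    return px - screen_w / 2.0, py - screen_h / 2.0

# e.g. on a 1080 x 2340 screen, a touch at pixel (540, 1170) maps to (0.0, 0.0).
```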
2014. The electronic equipment acquires first scene information corresponding to each historical manipulation information.
For example, if a user performs one-click operation on a display screen of the electronic device, the electronic device receives the historical manipulation instruction, and after receiving the historical manipulation instruction, the electronic device may acquire the position information of the historical manipulation instruction and determine the historical manipulation information according to the position information of the historical manipulation instruction. Then, the electronic device may obtain scene information corresponding to a scene where the electronic device is located when the historical manipulation instruction is received, and determine the scene information as first scene information corresponding to each piece of historical manipulation information.
2015. The electronic equipment determines a plurality of historical control information and corresponding first scene information according to each piece of historical control information and the corresponding first scene information of each piece of historical control information.
It can be understood that after the electronic device obtains the plurality of pieces of historical manipulation information and the first scene information corresponding to the plurality of pieces of historical manipulation information, the electronic device may determine the plurality of pieces of historical manipulation information and the first scene information corresponding to the plurality of pieces of historical manipulation information according to each piece of historical manipulation information and the first scene information corresponding to each piece of historical manipulation information.
Referring to fig. 5, in some embodiments, flow 2014 may include:
20141. the electronic equipment acquires a scene corresponding to each historical manipulation information to obtain a plurality of scenes.
20142. The electronic equipment normalizes the plurality of scenes to obtain a normalization value corresponding to each scene.
20143. The electronic equipment determines first scene information corresponding to each historical manipulation information according to the normalization value corresponding to each scene.
For example, each time the user performs an operation such as a single click, multiple clicks, or a gesture operation on the display screen of the electronic device, the electronic device receives a historical manipulation instruction. The electronic device can determine a piece of historical manipulation information from each historical manipulation instruction, obtaining a plurality of pieces of historical manipulation information. Each time the electronic device receives a historical manipulation instruction, it can also acquire the scenario in which it is located at that moment, so that a plurality of scenarios are obtained. The scenario in which the electronic device is located includes both the scene in which the device is located and the state of the device. The scene may be: a shopping mall, an office, a conference room, a game hall, a video hall, a movie theater, an entertainment venue, a music venue, or the like. The state of the electronic device may be: in a pocket, in the user's hand, on a desktop, on an electronic device stand, being carried while running or walking, or the like.
Suppose that the electronic device acquires 10 pieces of historical manipulation information in total, and the scenes corresponding to the 10 pieces of historical manipulation information may be the same or different. It is assumed that the 10 pieces of historical manipulation information respectively correspond to scenes including shopping malls, offices, meeting rooms, game halls, video and audio halls and movie theaters. Namely, the scene corresponding to each historical control information can be one of a shopping mall, an office, a conference room, a game hall, a video hall and a movie theater. It is assumed that the 10 historical manipulation information respectively correspond to states included in a pocket, a user's hand, a table, an electronic device stand, running, or walking. That is, the state corresponding to each historical manipulation information may be one of a pocket, a user's hand, a desktop, an electronic device stand, running, or walking.
That is, the plurality of scenarios acquired by the electronic device includes a plurality of scenes and a plurality of states. Assume the scenes are: mall (0), office (1), conference room (2), game hall (3), video hall (4), and movie theater (5); and the states are: pocket (0), user's hand (1), desktop (2), phone stand (3), running (4), and walking (5).
The electronic device can perform normalization processing on the plurality of scenarios to obtain a normalized value corresponding to each of them. For example, the electronic device may normalize the mall, office, conference room, game hall, video hall, and movie theater onto the interval [0, 1]: the normalized value corresponding to the mall may be 0, the office 0.2, the conference room 0.4, the game hall 0.6, the video hall 0.8, and the movie theater 1. Similarly, the electronic device may normalize the pocket, user's hand, desktop, phone stand, running, and walking onto the interval [0, 1]: the normalized value corresponding to the pocket may be 0, the user's hand 0.2, the desktop 0.4, the phone stand 0.6, running 0.8, and walking 1.
For example, if the scene corresponding to a certain piece of historical manipulation information is an office and the corresponding state is a desktop, then its corresponding scene information is [0.2, 0.4].
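The normalization above can be sketched as an even mapping of each category onto [0, 1]; the category lists and spacing follow the example values in the text, and the label strings themselves are illustrative.

```python
# Map each scene or state category onto evenly spaced values in [0, 1].
SCENES = ["mall", "office", "conference_room", "game_hall", "video_hall", "movie_theater"]
STATES = ["pocket", "hand", "desktop", "phone_stand", "running", "walking"]

def normalize(label, categories):
    """Index-based normalization: 0, 0.2, 0.4, 0.6, 0.8, 1.0 for six categories."""
    return categories.index(label) / (len(categories) - 1)

# e.g. normalize("office", SCENES) -> 0.2 and normalize("desktop", STATES) -> 0.4,
# so an office/desktop scenario yields the scene information [0.2, 0.4].
```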
In some embodiments, flow 2012 may include:
and the electronic equipment acquires the position information and the control mode corresponding to each historical control instruction.
The process 2013 may include: and the electronic equipment determines each historical control information according to the position information and the control mode corresponding to each historical control instruction.
And the electronic equipment can acquire the position information and the control mode corresponding to the historical control instruction every time the electronic equipment receives one historical control instruction.
For example, when the user clicks the display screen once, the electronic device receives the history manipulation instruction. The position information corresponding to the historical manipulation instruction may be a position of the historical manipulation instruction relative to the display screen. I.e. coordinates of the position where the user clicks the display screen with respect to the whole display screen. For example, assuming that a planar rectangular coordinate system is established with the center of the display screen as an origin, the position information corresponding to the historical manipulation instruction may be a coordinate of the position of the historical manipulation instruction in the planar rectangular coordinate system. Meanwhile, the electronic device may determine that the manipulation manner is a single click.
For example, when the user clicks the same position of the display screen for multiple times, the electronic device receives the historical manipulation instruction. The position information corresponding to the historical manipulation instruction may be a position of the historical manipulation instruction relative to the display screen. I.e. coordinates of the position where the user clicks the display screen with respect to the whole display screen. For example, assuming that a planar rectangular coordinate system is established with the center of the display screen as an origin, the position information corresponding to the historical manipulation instruction may be a coordinate of the position of the historical manipulation instruction in the planar rectangular coordinate system. Meanwhile, the electronic device may determine that the manipulation manner is multiple clicks.
For example, when the user performs a gesture operation on the display screen, the electronic device receives a historical manipulation instruction. The position information corresponding to the historical manipulation instruction may be the position of the historical manipulation instruction relative to the display screen, that is, the coordinates, relative to the entire display screen, of the position where the user performs the gesture operation. For example, assuming that a planar rectangular coordinate system is established with the center of the display screen as the origin, the position information corresponding to the historical manipulation instruction may be the coordinates of that position in the coordinate system, and may also include a gesture size and a gesture direction. Meanwhile, the electronic device may determine that the manipulation manner is gesture manipulation.
To unify the historical manipulation information and simplify calculation, the historical manipulation information can be recorded in the form [xi, yi, zi, wi, ki], where xi and yi are the coordinates of the location of the historical manipulation instruction, zi and wi indicate the size and direction of the historical manipulation instruction, and ki represents the manipulation manner.
When the historical manipulation instruction received by the electronic device corresponds to a single click on the display screen, the historical manipulation information determined from it can be recorded as [x1, y1, z1, w1, k1]. Assuming that a planar rectangular coordinate system is established with the center of the display screen as the origin, x1 and y1 are the abscissa and ordinate of the historical manipulation instruction in that coordinate system, z1 and w1 may be recorded as 0, and k1 indicates that the manipulation manner is a single click.
When the historical manipulation instruction received by the electronic device corresponds to a gesture operation, the historical manipulation information determined from it can be recorded as [x2, y2, z2, w2, k2]. Again assuming a planar rectangular coordinate system with the center of the display screen as the origin, x2 and y2 are the abscissa and ordinate of the historical manipulation instruction in that coordinate system, z2 and w2 represent the size and direction of the gesture, and k2 indicates that the manipulation manner is gesture manipulation.
It is to be understood that the first data to be trained may be represented as follows: { [x1, y1, z1, w1, k1], [0.2, 0.4], [x2, y2, z2, w2, k2], [0.4, 0.4], [x3, y3, z3, w3, k3], [0.4, 0.6], ..., [xn-1, yn-1, zn-1, wn-1, kn-1], [0.6, 0.8], [xn, yn, zn, wn, kn], [0.4, 0.8] }, where [xn, yn, zn, wn, kn], [0.4, 0.8] are the historical manipulation information corresponding to the erroneous manipulation instruction and its corresponding scene information, and [x1, y1, z1, w1, k1], [0.2, 0.4] through [xn-1, yn-1, zn-1, wn-1, kn-1], [0.6, 0.8] are the historical manipulation information corresponding to non-erroneous manipulation instructions and their corresponding scene information.
Alternatively, the second data to be trained may be represented as follows: { [x1, y1, z1, w1, k1], [0.2, 0.4], [x2, y2, z2, w2, k2], [0.4, 0.4], [x3, y3, z3, w3, k3], [0.4, 0.6], ..., [xn-1, yn-1, zn-1, wn-1, kn-1], [0.6, 0.8], [xn, yn, zn, wn, kn], [0.4, 0.8] }, where every pair of historical manipulation information and scene information corresponds to a non-erroneous manipulation instruction.
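A small sketch of assembling one flattened record in the [xi, yi, zi, wi, ki] form above, followed by the two normalized scene values; the numeric encoding of the manipulation manner ki (e.g. 0 = single click, 1 = multiple clicks, 2 = gesture) is an illustrative assumption.

```python
# Build one flattened record: manipulation information plus normalized scene information.
def build_record(x, y, size, direction, manner, scene, state):
    return [x, y, size, direction, float(manner), scene, state]

# e.g. a single click at centred coordinates (120, -45) in an office, on a desktop:
# build_record(120, -45, 0, 0, manner=0, scene=0.2, state=0.4)
```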
In some embodiments, in an electronic device with the panoramic perception architecture, when the electronic device receives a historical manipulation instruction it may use the information perception layer to acquire a plurality of pieces of raw data corresponding to that instruction, including the data underlying the historical manipulation information and the first scene information corresponding to it. The data processing layer may then process the raw data acquired by the information perception layer, for example cleaning it to remove invalid and duplicate data. Next, the feature extraction layer may perform feature extraction on the processed data; the extracted features may reflect the historical manipulation information and the scene information corresponding to it. The data to be trained is then determined from the plurality of pieces of historical manipulation information and their corresponding scene information, and the scenario modeling layer trains on this data to obtain the instruction prediction model. Finally, the intelligent service layer predicts, according to the instruction prediction model obtained by the scenario modeling layer, whether the user's current operation is a mis-operation.
For example, when the user performs a click operation, the electronic device receives a current manipulation instruction and determines the data to be identified from it; the intelligent service layer then inputs the data to be identified into the instruction prediction model to predict whether the current manipulation instruction is erroneous. When the current manipulation instruction is predicted to be erroneous, the user's current operation is a mis-operation, so the electronic device does not respond to it; when the current manipulation instruction is predicted to be non-erroneous, the current operation is a normal operation, so the electronic device responds to it.
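The end-to-end flow described above can be sketched as a small pipeline; every function passed in below is an illustrative placeholder for the corresponding layer, not an API of the panoramic perception architecture itself.

```python
# Pipeline sketch: clean raw data (data processing layer) -> extract features
# (feature extraction layer) -> predict with the trained model and respond or
# ignore (intelligent service layer). `clean`, `extract`, and `dispatch` are
# caller-supplied placeholders.
def handle_manipulation_instruction(raw_sensor_data, instruction, model, clean, extract, dispatch):
    cleaned = clean(raw_sensor_data)
    features = extract(cleaned, instruction)
    label = model.predict([features])[0]   # 0 = erroneous, 1 = non-erroneous
    if label == 0:
        return None                        # do not respond to the current operation
    return dispatch(instruction)           # respond to the current operation
```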
Referring to fig. 6, fig. 6 is a flowchart illustrating an operation processing method according to an embodiment of the present application. The flow of the operation processing method may include:
in 301, data to be identified is obtained, where the data to be identified includes current manipulation information and second scene information corresponding to the current manipulation information, the current manipulation information is information corresponding to a current manipulation instruction received by the electronic device, and the second scene information is scene information corresponding to a scene where the electronic device is located when the electronic device receives the current manipulation instruction.
For example, when the user clicks the display screen, the electronic device receives the current manipulation instruction. Then, the electronic device may determine the current manipulation information and the corresponding second scenario information according to the current manipulation instruction. The electronic device may determine the data to be identified according to the current manipulation information and the second scene information corresponding to the current manipulation information.
In some embodiments, when the user clicks the display screen, the electronic device receives the current manipulation instruction, and the electronic device may then obtain the position information corresponding to the current manipulation instruction. The position information corresponding to the current manipulation instruction may be the position of the current manipulation instruction relative to the display screen, for example, the coordinates of the position where the user clicks the display screen relative to the entire display screen. For example, assuming that a plane rectangular coordinate system is established with the center of the display screen as the origin, the position information corresponding to the current manipulation instruction may be the coordinates of the position of the current manipulation instruction in that plane rectangular coordinate system.
For example, when the user performs a gesture operation on the display screen, the electronic device receives a current control instruction. The position information corresponding to the current control instruction may be the position of the current control instruction relative to the display screen, that is, the coordinates, relative to the entire display screen, of the position where the user performs the gesture operation. For example, assuming that a plane rectangular coordinate system is established with the center of the display screen as the origin, the position information corresponding to the current control instruction may be the coordinates of the position where the user performs the gesture operation in that plane rectangular coordinate system; meanwhile, the position information corresponding to the current control instruction may further include a gesture size and a gesture direction.
After the electronic device acquires the position information corresponding to the current control instruction, it can determine the current control information according to that position information. For example, the coordinates of the position of the current control instruction in the pre-established plane rectangular coordinate system are determined as the current control information; the coordinates together with the gesture size and gesture direction of the manipulation may also be determined as the current control information.
Then, the electronic device may obtain scene information corresponding to a scene where the electronic device is located when the current manipulation instruction is received, and determine the scene information as second scene information corresponding to the current manipulation information. Therefore, the electronic equipment can determine the data to be identified according to the current control information and the second scene information.
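A small sketch of how the data to be identified might be formed from a touch event under the coordinate convention described above (origin at the center of the display screen) is given below; the scene-value argument is an assumed, already-normalized quantity and the function name is hypothetical.

```python
def build_data_to_be_identified(touch_x, touch_y, screen_w, screen_h,
                                scene_value):
    """Combine current control information with second scene information."""
    # Position of the click in a plane rectangular coordinate system whose
    # origin is the center of the display screen.
    x = touch_x - screen_w / 2.0
    y = touch_y - screen_h / 2.0
    current_control_info = [x, y]
    second_scene_info = [scene_value]     # e.g. 0.4 for some normalized scene
    return current_control_info + second_scene_info
```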
In 302, data to be recognized is input into a pre-trained instruction prediction model, and whether a current manipulation instruction received by the electronic device is a wrong manipulation instruction is predicted.
It should be noted that, in this embodiment, an instruction prediction model is trained in advance, the instruction prediction model being obtained by performing model training according to a plurality of data to be trained. For example, when the electronic device trains the instruction prediction model, an SVM classification algorithm model may be used to train the plurality of data to be trained, and the trained SVM classification algorithm model is the instruction prediction model. The output of the instruction prediction model includes two types, 0 and 1, where 1 represents a non-misoperation instruction and 0 represents a misoperation instruction.
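As one possible concrete reading of this step, the following sketch trains a scikit-learn SVC on made-up samples laid out as a control vector followed by scene information, with label 1 for a non-misoperation instruction and 0 for a misoperation instruction; the feature layout and all numbers here are illustrative assumptions, not data from this application.

```python
from sklearn.svm import SVC

# Each row: five-dimensional control vector + two-dimensional scene info.
X = [
    [0.10, -0.20, 0.0, 0.0, 0.0, 0.2, 0.4],
    [0.15, -0.18, 0.0, 0.0, 0.0, 0.4, 0.4],
    [0.90,  0.95, 0.0, 0.0, 0.0, 0.4, 0.8],  # edge touch in an unlikely scene
]
y = [1, 1, 0]                                 # 1 = non-misoperation, 0 = misoperation

# The trained SVM classification algorithm model serves as the
# instruction prediction model.
instruction_prediction_model = SVC(kernel="rbf")
instruction_prediction_model.fit(X, y)
```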
It should be noted that the SVM classification algorithm model above is only an example and is not intended to limit the present application. In this embodiment, the classification algorithm to be adopted can be set by a person skilled in the art according to actual needs; besides the SVM classification algorithm model above, the classification algorithm may also include, but is not limited to, a naive Bayes classification algorithm, a KNN algorithm, a neural network algorithm, and the like.
After the electronic equipment acquires the data to be recognized, the data to be recognized can be input into a pre-trained instruction prediction model, so that whether the current control instruction received by the electronic equipment is a wrong control instruction or not is predicted. The electronic equipment can make different responses according to whether the received current control instruction is the misoperation instruction or not.
For example, the electronic device may input the data to be recognized into the pre-trained instruction prediction model, whose output includes two types, 0 and 1, where 1 represents a non-misoperation instruction and 0 represents a misoperation instruction. If the electronic device inputs the data to be identified corresponding to the current control instruction into the pre-trained instruction prediction model and the output obtained is "0", the current control instruction received by the electronic device is a misoperation instruction; if the output obtained is "1", the current control instruction received by the electronic device is a non-misoperation instruction.
As can be seen from the above, in the embodiment of the application, the electronic device only needs to train to obtain the instruction prediction model for predicting whether a control instruction is a misoperation instruction; then, whenever data to be identified is obtained, the trained instruction prediction model can be used to predict whether the current control instruction is a misoperation instruction, so that the electronic device responds differently according to the prediction result.
If the electronic device determines that the received current control instruction is a misoperation instruction, it does not respond to the current control instruction; if it determines that the received current control instruction is a non-misoperation instruction, it responds to the current control instruction.
For example, when the user's hand inadvertently touches the interface of an application, the electronic device receives a current control instruction. The electronic device may determine, through the operation processing method provided in this embodiment, that the current control instruction is a misoperation instruction, and the electronic device does not respond to it; that is, the electronic device does not enter the interface of the application. When the user clicks the interface of an application, the electronic device receives a current control instruction. The electronic device may determine, through the operation processing method provided in this embodiment, that the current control instruction is a non-misoperation instruction, and the electronic device responds to it; that is, the electronic device enters the interface of the application.
In some embodiments, when the electronic device determines that the received current control instruction is a misoperation instruction, the electronic device may generate and display a prompt message to prompt the user to input a correct control instruction. The prompt message may also be delivered by voice broadcast or the like.
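Putting the prediction and the response together, a hedged sketch of the decision logic might look like the following; the model is the SVC trained in the earlier sketch, and the response and prompt are reduced to print statements purely for illustration.

```python
def handle_current_instruction(model, data_to_be_identified):
    """Gate the response on the instruction prediction model's output
    (0 = misoperation instruction, 1 = non-misoperation instruction)."""
    predicted = model.predict([data_to_be_identified])[0]
    if predicted == 0:
        # Misoperation: do not respond; optionally prompt for a correct instruction.
        print("Operation ignored; please input a correct control instruction.")
        return False
    # Non-misoperation: respond to the current operation normally.
    print("Responding to the current operation.")
    return True
```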
Referring to fig. 7, fig. 7 is a schematic structural diagram of a model training device according to an embodiment of the present application. The model training apparatus may include: an acquisition module 401 and a training module 402.
The obtaining module 401 is configured to obtain a plurality of data to be trained, where the data to be trained includes historical control information and first scene information corresponding to the historical control information, the historical control information is information corresponding to a historical control instruction received by the electronic device, the historical control instruction is a wrong control instruction or a non-wrong control instruction, and the first scene information is scene information corresponding to a scene where the electronic device is located when the electronic device receives the historical control instruction.
A training module 402, configured to perform model training according to the multiple data to be trained, to obtain an instruction prediction model for predicting whether the control instruction is a faulty control instruction.
In some embodiments, the obtaining module 401 may be configured to: acquiring a plurality of historical control information and corresponding first scene information according to a time sequence; when the acquired first historical manipulation information is information corresponding to a wrong manipulation instruction received by the electronic equipment, determining first data to be trained according to the first historical manipulation information and third scene information corresponding to the first historical manipulation information, and historical manipulation information before the first historical manipulation information and first scene information corresponding to the first historical manipulation information, wherein the first historical manipulation information is included in the plurality of historical manipulation information; when the acquired historical control information with the preset number is information corresponding to the non-error control instruction received by the electronic equipment, determining second data to be trained according to the historical control information with the preset number and the corresponding first scene information; and determining a plurality of data to be trained according to the first data to be trained and the second data to be trained.
In some embodiments, the obtaining module 401 may be configured to: receiving a plurality of historical manipulation instructions according to a time sequence; acquiring position information corresponding to each historical control instruction; determining each historical control information according to the position information corresponding to each historical control instruction; acquiring first scene information corresponding to each historical control information; and determining the plurality of historical control information and the corresponding first scene information according to each piece of historical control information and the corresponding first scene information of each piece of historical control information.
In some embodiments, the obtaining module 401 may be configured to: acquiring a scene corresponding to each historical control information to obtain a plurality of scenes; carrying out normalization processing on the plurality of scenes to obtain a normalization value corresponding to each scene; and determining first scene information corresponding to each historical control information according to the normalization value corresponding to each scene.
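For the normalization step mentioned in this module, one possible sketch is shown below; mapping the distinct scenes to evenly spaced values in [0, 1] is an assumption for illustration, not the normalization method fixed by this application.

```python
def normalise_scenes(scenes):
    """Map each distinct scene to a normalization value in [0, 1]."""
    distinct = sorted(set(scenes))
    if len(distinct) == 1:
        return {distinct[0]: 1.0}
    step = 1.0 / (len(distinct) - 1)
    return {scene: round(i * step, 4) for i, scene in enumerate(distinct)}


# Example: {'home': 0.0, 'office': 0.5, 'subway': 1.0}
scene_values = normalise_scenes(["home", "office", "subway", "home"])
```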
In some embodiments, the obtaining module 401 may be configured to: acquiring position information and an operation and control mode corresponding to each historical operation and control instruction; and determining each historical control information according to the position information and the control mode corresponding to each historical control instruction.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an operation processing device according to an embodiment of the present disclosure. The operation processing apparatus may include: an obtaining module 501 and a prediction module 502.
The obtaining module 501 is configured to obtain data to be identified, where the data to be identified includes current control information and second scene information corresponding to the current control information, the current control information is information corresponding to a current control instruction received by the electronic device, and the second scene information is scene information corresponding to a scene where the electronic device is located when the electronic device receives the current control instruction.
The prediction module 502 is configured to input the data to be recognized into a pre-trained instruction prediction model, and predict whether a current operation instruction received by the electronic device is an incorrect operation instruction;
and the instruction prediction model is obtained by performing model training according to a plurality of data to be trained.
An embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, which, when executed on a computer, causes the computer to execute a procedure in a model training method as provided in this embodiment, or causes the computer to execute a procedure in an operation processing method as provided in this embodiment.
The embodiment of the present application further provides an electronic device, which includes a memory and a processor, where the memory stores a computer program, and the processor is configured to execute a procedure in the model training method provided in the present embodiment or execute a procedure in the operation processing method provided in the present embodiment by calling the computer program stored in the memory.
For example, the electronic device may be a smart phone, a tablet computer, a game device, an AR (augmented reality) device, an automobile, a data storage device, an audio playing device, a video playing device, a notebook computer, a desktop computing device, or a wearable device such as an electronic watch, electronic glasses, an electronic helmet, an electronic bracelet, an electronic necklace, electronic clothing, or the like.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
The electronic device 600 may include components such as a processor 601 and a memory 602. The processor 601 is electrically connected to the memory 602. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 9 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than those shown, combine some components, or arrange the components differently.
The processor 601 is a control center of the electronic device 600, connects various parts of the whole electronic device by using various interfaces and lines, and performs various functions of the electronic device and processes data by running or calling a computer program stored in the memory 602 and calling data stored in the memory 602, thereby performing overall monitoring of the electronic device.
In this embodiment, the processor 601 in the electronic device 600 loads instructions corresponding to one or more processes of the computer program into the memory 602 according to the following procedures, and the processor 601 runs the computer program stored in the memory 602, so as to implement various functions:
acquiring a plurality of data to be trained, wherein the data to be trained comprises historical control information and first scene information corresponding to the historical control information, the historical control information is information corresponding to a historical control instruction received by the electronic equipment, the historical control instruction is a wrong control instruction or a non-wrong control instruction, and the first scene information is scene information corresponding to a scene where the electronic equipment is located when the electronic equipment receives the historical control instruction;
and performing model training according to the plurality of data to be trained to obtain an instruction prediction model for predicting whether the control instruction is a misoperation instruction.
Alternatively, the processor 601 in the electronic device 600 loads instructions corresponding to one or more processes of the computer program into the memory 602 according to the following procedures, and the processor 601 executes the computer program stored in the memory 602, thereby implementing various functions:
acquiring data to be identified, wherein the data to be identified comprises current control information and second scene information corresponding to the current control information, the current control information is information corresponding to a current control instruction received by the electronic equipment, and the second scene information is scene information corresponding to a scene where the electronic equipment is located when the electronic equipment receives the current control instruction;
inputting the data to be identified into a pre-trained instruction prediction model, and predicting whether a current control instruction received by the electronic equipment is a wrong control instruction;
and the instruction prediction model is obtained by performing model training according to a plurality of data to be trained.
Referring to fig. 10, fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Wherein, electronic equipment 700 includes: a processor 701, a memory 702, a display 703, a control circuit 704, an input unit 705, a sensor 706, and a power supply 707. The processor 701 is electrically connected to the display screen 703, the control circuit 704, the input unit 705, the sensor 706, and the power source 707.
The display screen 703 may be used to display information entered by or provided to the user as well as various graphical user interfaces of the electronic device, which may be comprised of images, text, icons, video, and any combination thereof.
The control circuit 704 is electrically connected to the display screen 703 and is configured to control the display screen 703 to display information.
The input unit 705 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint), and generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. The input unit 705 may include a fingerprint recognition module.
The sensor 706 is used to collect information of the electronic device itself or information of the user or external environment information. For example, the sensor 706 may include a plurality of sensors such as a distance sensor, a magnetic field sensor, a light sensor, an acceleration sensor, a fingerprint sensor, a hall sensor, a position sensor, a gyroscope, an inertial sensor, an attitude sensor, a barometer, a heart rate sensor, and the like.
The power supply 707 is used to supply power to the various components of the electronic device 700. In some embodiments, the power supply 707 may be logically coupled to the processor 701 through a power management system, so that charging, discharging, and power consumption are managed through the power management system.
Although not shown in fig. 10, the electronic device 700 may further include a camera, a bluetooth module, and the like, which are not described in detail herein.
In this embodiment, the processor 701 in the electronic device loads the executable code corresponding to the process of one or more application programs into the memory 702 according to the following instructions, and the processor 701 runs the application program stored in the memory 702, thereby implementing the flow:
acquiring a plurality of data to be trained, wherein the data to be trained comprises historical control information and first scene information corresponding to the historical control information, the historical control information is information corresponding to a historical control instruction received by the electronic equipment, the historical control instruction is a wrong control instruction or a non-wrong control instruction, and the first scene information is scene information corresponding to a scene where the electronic equipment is located when the electronic equipment receives the historical control instruction;
and performing model training according to the plurality of data to be trained to obtain an instruction prediction model for predicting whether the control instruction is a misoperation instruction.
In some embodiments, when the processor 701 executes the procedure of acquiring a plurality of data to be trained, it may perform: acquiring a plurality of historical control information and corresponding first scene information according to a time sequence; when the acquired first historical manipulation information is information corresponding to a wrong manipulation instruction received by the electronic equipment, determining first data to be trained according to the first historical manipulation information and third scene information corresponding to the first historical manipulation information, and historical manipulation information before the first historical manipulation information and first scene information corresponding to the first historical manipulation information, wherein the first historical manipulation information is included in the plurality of historical manipulation information; when the acquired historical control information with the preset number is information corresponding to the non-error control instruction received by the electronic equipment, determining second data to be trained according to the historical control information with the preset number and the corresponding first scene information; and determining a plurality of data to be trained according to the first data to be trained and the second data to be trained.
In some embodiments, when the processor 701 executes the process of obtaining the plurality of historical manipulation information and the corresponding first scenario information in time sequence, it may execute: receiving a plurality of historical manipulation instructions according to a time sequence; acquiring position information corresponding to each historical control instruction; determining each historical control information according to the position information corresponding to each historical control instruction; acquiring first scene information corresponding to each historical control information; and determining the plurality of historical control information and the corresponding first scene information according to each piece of historical control information and the corresponding first scene information of each piece of historical control information.
In some embodiments, when the processor 701 executes the process of acquiring the first scenario information corresponding to each piece of historical manipulation information, it may perform: acquiring a scene corresponding to each historical control information to obtain a plurality of scenes; carrying out normalization processing on the plurality of scenes to obtain a normalization value corresponding to each scene; and determining first scene information corresponding to each historical control information according to the normalization value corresponding to each scene.
In some embodiments, when the processor 701 executes the process of obtaining the location information corresponding to each historical manipulation instruction, the following steps may be performed: acquiring position information and an operation and control mode corresponding to each historical operation and control instruction; determining each historical manipulation information according to the position information corresponding to each historical manipulation instruction, including: and determining each historical control information according to the position information and the control mode corresponding to each historical control instruction.
Alternatively, the processor 701 in the electronic device may load executable codes corresponding to processes of one or more application programs into the memory 702 according to the following instructions, and the processor 701 executes the application program stored in the memory 702, thereby implementing the flow:
acquiring data to be identified, wherein the data to be identified comprises current control information and second scene information corresponding to the current control information, the current control information is information corresponding to a current control instruction received by the electronic equipment, and the second scene information is scene information corresponding to a scene where the electronic equipment is located when the electronic equipment receives the current control instruction;
inputting the data to be identified into a pre-trained instruction prediction model, and predicting whether a current control instruction received by the electronic equipment is a wrong control instruction;
and the instruction prediction model is obtained by performing model training according to a plurality of data to be trained.
In the above embodiments, the descriptions of the embodiments have respective emphasis, and parts that are not described in detail in a certain embodiment may refer to the above detailed description of the model training method/operation processing method, and are not described herein again.
The model training device/operation processing device provided in the embodiment of the present application and the model training method/operation processing method in the above embodiments belong to the same concept, and any one of the methods provided in the embodiment of the model training method/operation processing method may be run on the model training device/operation processing device, and the specific implementation process thereof is described in the embodiment of the model training method/operation processing method, and is not described herein again.
It should be noted that, for the model training method/operation processing method described in the embodiment of the present application, it can be understood by those skilled in the art that all or part of the process for implementing the model training method/operation processing method described in the embodiment of the present application can be implemented by controlling the relevant hardware through a computer program, where the computer program can be stored in a computer-readable storage medium, such as a memory, and executed by at least one processor, and the process of the embodiment of the model training method/operation processing method can be included in the execution process. The storage medium may be a magnetic disk, an optical disk, a Read Only Memory (ROM), a Random Access Memory (RAM), or the like.
For the model training device/operation processing device according to the embodiment of the present application, each functional module may be integrated into one processing chip, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium, such as a read-only memory, a magnetic or optical disk, or the like.
The model training method, the operation processing method, the apparatus, the storage medium, and the device provided in the embodiments of the present application are described in detail above, and a specific example is applied in the present application to explain the principle and the implementation of the present application, and the description of the above embodiments is only used to help understanding the method and the core idea of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A model training method is applied to electronic equipment and is characterized by comprising the following steps:
acquiring a plurality of data to be trained, wherein the data to be trained comprises historical control information and first scene information corresponding to the historical control information, the historical control information is information corresponding to a historical control instruction received by the electronic equipment, the historical control instruction is a wrong control instruction or a non-wrong control instruction, and the first scene information is scene information corresponding to a scene where the electronic equipment is located when the electronic equipment receives the historical control instruction;
and performing model training according to the plurality of data to be trained to obtain an instruction prediction model for predicting whether the control instruction is a misoperation instruction.
2. The model training method of claim 1, wherein the obtaining a plurality of data to be trained comprises:
acquiring a plurality of historical control information and corresponding first scene information according to a time sequence;
when the acquired first historical manipulation information is information corresponding to a wrong manipulation instruction received by the electronic equipment, determining first data to be trained according to the first historical manipulation information and third scene information corresponding to the first historical manipulation information, and historical manipulation information before the first historical manipulation information and first scene information corresponding to the first historical manipulation information, wherein the first historical manipulation information is included in the plurality of historical manipulation information;
when the acquired historical control information with the preset number is information corresponding to the non-error control instruction received by the electronic equipment, determining second data to be trained according to the historical control information with the preset number and the corresponding first scene information;
and determining a plurality of data to be trained according to the first data to be trained and the second data to be trained.
3. The model training method according to claim 2, wherein the obtaining of the plurality of historical manipulation information and the corresponding first scenario information according to the time sequence comprises:
receiving a plurality of historical manipulation instructions according to a time sequence;
acquiring position information corresponding to each historical control instruction;
determining each historical control information according to the position information corresponding to each historical control instruction;
acquiring first scene information corresponding to each historical control information;
and determining the plurality of historical control information and the corresponding first scene information according to each piece of historical control information and the corresponding first scene information of each piece of historical control information.
4. The model training method according to claim 3, wherein the obtaining of the first scenario information corresponding to each piece of historical manipulation information includes:
acquiring a scene corresponding to each historical control information to obtain a plurality of scenes;
carrying out normalization processing on the plurality of scenes to obtain a normalization value corresponding to each scene;
and determining first scene information corresponding to each historical control information according to the normalization value corresponding to each scene.
5. The model training method according to claim 3, wherein the obtaining of the position information corresponding to each historical manipulation instruction comprises:
acquiring position information and an operation and control mode corresponding to each historical operation and control instruction;
determining each historical manipulation information according to the position information corresponding to each historical manipulation instruction, including:
and determining each historical control information according to the position information and the control mode corresponding to each historical control instruction.
6. An operation processing method applied to an electronic device, the method comprising:
acquiring data to be identified, wherein the data to be identified comprises current control information and second scene information corresponding to the current control information, the current control information is information corresponding to a current control instruction received by the electronic equipment, and the second scene information is scene information corresponding to a scene where the electronic equipment is located when the electronic equipment receives the current control instruction;
inputting the data to be identified into a pre-trained instruction prediction model, and predicting whether a current control instruction received by the electronic equipment is a wrong control instruction;
and the instruction prediction model is obtained by performing model training according to a plurality of data to be trained.
7. A model training device applied to electronic equipment is characterized by comprising:
the training device comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a plurality of data to be trained, the data to be trained comprises historical control information and first scene information corresponding to the historical control information, the historical control information is information corresponding to a historical control instruction received by the electronic equipment, the historical control instruction is a wrong control instruction or a non-wrong control instruction, and the first scene information is scene information corresponding to a scene where the electronic equipment is located when the electronic equipment receives the historical control instruction;
and the training module is used for carrying out model training according to the plurality of data to be trained to obtain an instruction prediction model for predicting whether the control instruction is the misoperation instruction or not.
8. An operation processing apparatus applied to an electronic device, comprising:
the electronic equipment comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring data to be identified, the data to be identified comprises current control information and second scene information corresponding to the current control information, the current control information is information corresponding to a current control instruction received by the electronic equipment, and the second scene information is scene information corresponding to a scene where the electronic equipment is located when the electronic equipment receives the current control instruction;
the prediction module is used for inputting the data to be recognized into a pre-trained instruction prediction model and predicting whether a current control instruction received by the electronic equipment is a wrong control instruction or not;
and the instruction prediction model is obtained by performing model training according to a plurality of data to be trained.
9. A storage medium having stored therein a computer program which, when run on a computer, causes the computer to execute the model training method of any one of claims 1 to 5 or causes the computer to execute the operation processing method of claim 6.
10. An electronic device, characterized in that the electronic device comprises a processor and a memory, wherein a computer program is stored in the memory, and the processor is configured to execute the model training method according to any one of claims 1 to 5 or the operation processing method according to claim 6 by calling the computer program stored in the memory.
CN201910282473.8A 2019-04-09 2019-04-09 Model training method, operation processing method, device, storage medium and equipment Withdrawn CN111796701A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910282473.8A CN111796701A (en) 2019-04-09 2019-04-09 Model training method, operation processing method, device, storage medium and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910282473.8A CN111796701A (en) 2019-04-09 2019-04-09 Model training method, operation processing method, device, storage medium and equipment

Publications (1)

Publication Number Publication Date
CN111796701A true CN111796701A (en) 2020-10-20

Family

ID=72805344

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910282473.8A Withdrawn CN111796701A (en) 2019-04-09 2019-04-09 Model training method, operation processing method, device, storage medium and equipment

Country Status (1)

Country Link
CN (1) CN111796701A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170052625A1 (en) * 2015-08-20 2017-02-23 International Business Machines Corporation Wet finger tracking on capacitive touchscreens
CN105739760A (en) * 2016-01-22 2016-07-06 北京小米移动软件有限公司 Control method and device for anti-false touch mode
CN108701043A (en) * 2017-06-05 2018-10-23 华为技术有限公司 A kind of processing method and processing device of display
CN107958273A (en) * 2017-12-15 2018-04-24 北京小米移动软件有限公司 volume adjusting method, device and storage medium
CN108733427A (en) * 2018-03-13 2018-11-02 广东欧珀移动通信有限公司 Configuration method, device, terminal and the storage medium of input module

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112514441A (en) * 2020-11-02 2021-03-16 北京小米移动软件有限公司 Communication method, communication device and storage medium
WO2022088181A1 (en) * 2020-11-02 2022-05-05 北京小米移动软件有限公司 Communication method, communication apparatus and storage medium
WO2023035980A1 (en) * 2021-09-13 2023-03-16 上海微创医疗机器人(集团)股份有限公司 Storage medium, robotic system, and computer device

Similar Documents

Publication Publication Date Title
US9261995B2 (en) Apparatus, method, and computer readable recording medium for selecting object by using multi-touch with related reference point
US10027737B2 (en) Method, apparatus and computer readable medium for activating functionality of an electronic device based on the presence of a user staring at the electronic device
CN111243668B (en) Method and device for detecting molecule binding site, electronic device and storage medium
US9224064B2 (en) Electronic device, electronic device operating method, and computer readable recording medium recording the method
CN111737573A (en) Resource recommendation method, device, equipment and storage medium
US20140232748A1 (en) Device, method and computer readable recording medium for operating the same
EP3413548B1 (en) Method, apparatus, and recording medium for interworking with external terminal
CN104516499A (en) Apparatus and method of using events for user interface
CN111796925A (en) Method and device for screening algorithm model, storage medium and electronic equipment
CN111797854A (en) Scene model establishing method and device, storage medium and electronic equipment
CN111796701A (en) Model training method, operation processing method, device, storage medium and equipment
CN111800445B (en) Message pushing method and device, storage medium and electronic equipment
CN111797873A (en) Scene recognition method and device, storage medium and electronic equipment
KR101995799B1 (en) Place recognizing device and method for providing context awareness service
CN114360047A (en) Hand-lifting gesture recognition method and device, electronic equipment and storage medium
CN111797867A (en) System resource optimization method and device, storage medium and electronic equipment
CN115291786A (en) False touch judgment method and device based on machine learning and storage medium
KR20140103043A (en) Electronic device, method and computer readable recording medium for operating the electronic device
CN111796663B (en) Scene recognition model updating method and device, storage medium and electronic equipment
CN111797656B (en) Face key point detection method and device, storage medium and electronic equipment
CN111813639B (en) Method and device for evaluating equipment operation level, storage medium and electronic equipment
CN111796924A (en) Service processing method, device, storage medium and electronic equipment
CN111797878A (en) Data processing method, data processing device, storage medium and electronic equipment
CN111797391A (en) High-risk process processing method and device, storage medium and electronic equipment
CN111796883B (en) Equipment control method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20201020