CN110009004B - Image data processing method, computer device, and storage medium


Info

Publication number
CN110009004B
CN110009004B
Authority
CN
China
Prior art keywords
identified
image area
recognition result
current
current image
Prior art date
Legal status
Active
Application number
CN201910195309.3A
Other languages
Chinese (zh)
Other versions
CN110009004A (en)
Inventor
廖松茂
Current Assignee
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Nubia Technology Co Ltd filed Critical Nubia Technology Co Ltd
Priority to CN201910195309.3A
Publication of CN110009004A
Application granted
Publication of CN110009004B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Image Analysis (AREA)

Abstract

The present application relates to an image data processing method, a computer device, and a storage medium. The method comprises: identifying a candidate recognition result in a current image area to be identified; calculating the difference degree between the image features of the current image area to be identified and the image features of the candidate recognition result; and, when the difference degree is larger than a preset difference degree, taking the candidate recognition result as the current target recognition result. By judging the difference degree between the image area and the recognition result, the reliability of the recognition can be assessed: when the difference degree is larger than the preset difference degree, the background has little influence on the recognition result and the result is more likely to be accurate, so the recognition accuracy is improved.

Description

Image data processing method, computer device, and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image data processing method, a computer device, and a storage medium.
Background
With the development of computer technology, image processing technology has also advanced and is now widely applied in many fields, especially image recognition. However, complex background information in an image often interferes with the recognition process, resulting in low recognition accuracy.
Disclosure of Invention
In order to solve the technical problems described above, the present application provides an image data processing method, a computer device, and a storage medium.
In a first aspect, the present application provides an image data processing method, including:
identifying a candidate recognition result in a current image area to be identified;
calculating the difference degree between the image features of the current image area to be identified and the image features of the candidate recognition result;
and when the difference degree is larger than a preset difference degree, taking the candidate recognition result as a current target recognition result.
A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
identifying a candidate recognition result in a current image area to be identified;
calculating the difference degree between the image features of the current image area to be identified and the image features of the candidate recognition result;
and when the difference degree is larger than the preset difference degree, taking the candidate recognition result as the current target recognition result.
A computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the following steps:
identifying a candidate recognition result in a current image area to be identified;
calculating the difference degree between the image features of the current image area to be identified and the image features of the candidate recognition result;
and when the difference degree is larger than the preset difference degree, taking the candidate recognition result as the current target recognition result.
According to the image data processing method, the computer device, and the storage medium described above, the method comprises: identifying a candidate recognition result in a current image area to be identified, calculating the difference degree between the image features of the current image area to be identified and the image features of the candidate recognition result, and, when the difference degree is larger than the preset difference degree, taking the candidate recognition result as the current target recognition result. By judging the difference degree between the image area and the recognition result, the reliability of the recognition can be assessed: when the difference degree is larger than the preset difference degree, the background has little influence on the recognition result and the result is more likely to be accurate, so the recognition accuracy is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a mobile terminal in one embodiment;
FIG. 2 is a diagram of a communication network system architecture in one embodiment;
FIG. 3 is a flow chart of an image data processing method in one embodiment.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In the following description, suffixes such as "module", "component", or "unit" for representing elements are used only for facilitating the description of the present invention, and have no specific meaning per se. Thus, "module," "component," or "unit" may be used in combination.
The terminal may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as cell phones, tablet computers, notebook computers, palm computers, personal digital assistants (Personal Digital Assistant, PDA), portable media players (Portable Media Player, PMP), navigation devices, wearable devices, smart bracelets, pedometers, and fixed terminals such as digital TVs, desktop computers, and the like.
The following description takes a mobile terminal as an example; those skilled in the art will understand that, apart from elements used specifically for mobile purposes, the configuration according to the embodiments of the present invention can also be applied to fixed terminals.
Referring to fig. 1, which is a schematic diagram of a hardware structure of a mobile terminal implementing various embodiments of the present invention, the mobile terminal 100 may include: an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an a/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will appreciate that the mobile terminal structure shown in fig. 1 is not limiting of the mobile terminal and that the mobile terminal may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
The following describes the components of the mobile terminal in detail with reference to fig. 1:
The radio frequency unit 101 may be used for receiving and transmitting signals during information reception and transmission or during a call. Specifically, after receiving downlink information from the base station, it passes the information to the processor 110 for processing, and it transmits uplink data to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile Communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplex-Long Term Evolution), TDD-LTE (Time Division Duplex-Long Term Evolution), and the like.
WiFi belongs to a short-distance wireless transmission technology, and a mobile terminal can help a user to send and receive e-mails, browse web pages, access streaming media and the like through the WiFi module 102, so that wireless broadband Internet access is provided for the user. Although fig. 1 shows a WiFi module 102, it is understood that it does not belong to the necessary constitution of a mobile terminal, and can be omitted entirely as required within a range that does not change the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a talk mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output (e.g., a call signal reception sound, a message reception sound, etc.) related to a specific function performed by the mobile terminal 100. The audio output unit 103 may include a speaker, a buzzer, and the like.
The a/V input unit 104 is used to receive an audio or video signal. The a/V input unit 104 may include a graphics processor (Graphics Processing Unit, GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sound (audio data) in a phone call mode, a recording mode, a voice recognition mode, and the like, and can process such sound into audio data. In the case of a telephone call mode, the processed audio (voice) data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 101. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated while receiving and transmitting the audio signal.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor, wherein the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the acceleration in all directions (generally three axes), and can detect the gravity and direction when stationary, and can be used for applications of recognizing the gesture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and knocking), and the like; as for other sensors such as fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc. that may also be configured in the mobile phone, the detailed description thereof will be omitted.
The display unit 106 is used to display information input by a user or information provided to the user. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. In particular, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by the user on or near the touch panel 1071 using any suitable object or accessory such as a finger or a stylus) and drive the corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 110, and can receive and execute commands sent from the processor 110. Further, the touch panel 1071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 107 may include other input devices 1072 in addition to the touch panel 1071. In particular, the other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like, which are not specifically limited herein.
Further, the touch panel 1071 may overlay the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of touch event. Although in fig. 1 the touch panel 1071 and the display panel 1061 are two independent components implementing the input and output functions of the mobile terminal, in some embodiments the touch panel 1071 may be integrated with the display panel 1061 to implement the input and output functions, which is not limited herein.
The interface unit 108 serves as an interface through which at least one external device can be connected with the mobile terminal 100. For example, the external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and an external device.
Memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area that may store an operating system, application programs required for at least one function (such as a sound playing function, an image playing function, etc.), and a storage data area; the storage data area may store data (such as audio data, phonebook, etc.) created according to the use of the handset, etc. In addition, memory 109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the mobile terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The mobile terminal 100 may further include a power source 111 (e.g., a battery) for supplying power to the respective components, and preferably, the power source 111 may be logically connected to the processor 110 through a power management system, so as to perform functions of managing charging, discharging, and power consumption management through the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described herein.
In order to facilitate understanding of the embodiments of the present invention, a communication network system on which the mobile terminal of the present invention is based will be described below.
Referring to fig. 2, fig. 2 is a schematic diagram of a communication network system according to an embodiment of the present invention. The communication network system is an LTE system of the universal mobile telecommunications technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an operator's IP services 204, which are sequentially connected in communication.
Specifically, the UE201 may be the terminal 100 described above, and will not be described herein.
The E-UTRAN202 includes eNodeB2021 and other eNodeB2022, etc. The eNodeB2021 may be connected with other eNodeB2022 by a backhaul (e.g., an X2 interface), the eNodeB2021 is connected to the EPC203, and the eNodeB2021 may provide access from the UE201 to the EPC 203.
EPC203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, and so on. The MME2031 is a control node that handles signaling between the UE201 and the EPC203, providing bearer and connection management. The HSS2032 provides registers for managing functions such as the home location register (not shown) and holds user-specific information about service characteristics, data rates, and the like. All user data may be sent through the SGW2034; the PGW2035 may provide IP address allocation and other functions for the UE201; and the PCRF2036 is the policy and charging control decision point for traffic data flows and IP bearer resources, which selects and provides available policy and charging control decisions for a policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, an IMS (IP Multimedia Subsystem), or other IP services.
Although the LTE system is described above as an example, it should be understood by those skilled in the art that the present invention is not limited to LTE systems, but may be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems.
Based on the above mobile terminal hardware structure and the communication network system, various embodiments of the method of the present invention are provided.
As shown in fig. 3, in one embodiment, an image data processing method is provided. This embodiment is illustrated mainly by applying the method to the mobile terminal in fig. 1. Referring to fig. 3, the image data processing method specifically includes the following steps:
step S301, a candidate recognition result in the current image area to be recognized is recognized.
Specifically, the current image area to be identified is an image area intercepted from the current image, and there may be one or more such areas. The current image may be any video frame in a video being recorded by the terminal, or any video frame in a video downloaded over a network, for example any frame of screen-recording data captured during a game. The candidate recognition result is the recognition result obtained by applying a recognition algorithm to the current image area to be identified. The recognition result may be a character, a specific mark, or the like, and can be customized according to requirements, for example a triangle, a square, the letter A, or the number 8.
Step S302, calculating the difference degree between the image features of the current image area to be identified and the image features of the candidate identification result.
Step S303, when the difference is larger than the preset difference, the candidate recognition result is used as the current target recognition result.
Specifically, image features are used to describe the characteristics of an image and include, but are not limited to, color features, texture features, and the like. The difference degree is a quantity that measures the difference between image features, and its calculation method can be customized. Taking color features as an example, the color data of different image features in the same color space can be compared: in the RGB color space, the RGB values of the candidate recognition result can be used as its image features, and the difference degree between the image features of the candidate recognition result and the image features of the current image area to be identified is then calculated.
The preset difference degree is a preset critical value for judging whether the candidate recognition result is valid; it is set according to the specific situation, and the critical values corresponding to different application scenarios and different image features differ. The current target recognition result is the recognition result corresponding to the current image area to be identified. When the difference degree is larger than the preset difference degree, the difference between the image features of the current image area to be identified and those of the candidate recognition result is large, their similarity is low, the interference of the background with the candidate recognition result is small, and the recognition accuracy is accordingly high.
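The disclosure leaves the concrete difference measure open; the following Python sketch illustrates one plausible reading of the RGB example above, where the region array, the glyph mask, and the threshold value are illustrative assumptions rather than part of the patent.

    import numpy as np

    def color_difference(region: np.ndarray, glyph_mask: np.ndarray) -> float:
        """Difference degree between a candidate recognition result and its
        image area, read here as the Euclidean distance between the mean RGB
        of the recognized glyph pixels and the mean RGB of the background."""
        glyph_rgb = region[glyph_mask].mean(axis=0)        # average color of the glyph
        background_rgb = region[~glyph_mask].mean(axis=0)  # average color of the background
        return float(np.linalg.norm(glyph_rgb - background_rgb))

    def accept_candidate(region: np.ndarray, glyph_mask: np.ndarray,
                         preset_difference: float = 60.0) -> bool:
        # Keep the candidate only when it stands out clearly from its background.
        return color_difference(region, glyph_mask) > preset_difference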
In one embodiment, before step S301, the method further includes: obtaining the priority level corresponding to each current image area to be identified.
In this embodiment, the candidate recognition result in the current image area to be identified with the highest priority level is recognized, and the difference degree between the image features of the highest-priority current image area to be identified and the image features of the corresponding candidate recognition result is calculated. It is then judged whether the difference degree of the highest-priority current image area to be identified is larger than the preset difference degree; when it is larger than or equal to the preset difference degree, the recognition result of the highest-priority current image area to be identified is taken as the current target recognition result.
Specifically, the current image areas to be identified intercepted from the current image comprise a plurality of different image areas, which may or may not overlap. Different areas correspond to different priority levels, and the priority level of each area is preconfigured. Each current image area to be identified and its corresponding priority level are acquired. A current image area to be identified with a high priority is recognized preferentially, since a high priority means the recognition result of that area matters more. The candidate recognition result of the highest-priority current image area to be identified is therefore recognized first, and the recognition accuracy corresponding to that area is judged; the specific judging process is the same as in steps S302 and S303 and is not repeated here. Configuring different priority levels for different current image areas to be identified makes it possible to judge quickly whether the recognition result is reliable and then to take further action according to the recognition result.
In one embodiment, when the difference degree of the highest-priority current image area to be identified is smaller than the preset difference degree, the method further includes: acquiring the current image area to be identified with the next-highest priority according to the priority levels corresponding to the current image areas to be identified, taking that area as the new highest-priority current image area to be identified, and returning to the step of judging whether the difference degree of the highest-priority current image area to be identified is larger than the preset difference degree.
Specifically, when the difference degree between the highest-priority current area and the corresponding candidate result is smaller than the preset difference degree, the background of that area strongly affects the candidate recognition result and the accuracy of the recognition result is low. To improve recognition accuracy, the current image area to be identified with the next priority level is acquired and treated as the next-highest-priority area, and recognition, difference calculation, and difference judgment are repeated on it. When its difference degree is larger than the preset difference degree, the candidate recognition result corresponding to that area is used as the basis for judging whether the recognition result is accurate; when its difference degree is smaller than or equal to the preset difference degree, that area is in turn taken as the highest-priority current image area to be identified and the above process is repeated.
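A rough Python sketch of this priority fallback follows, assuming region records with a priority field and placeholder recognize and difference callables; none of these names come from the patent.

    def recognize_by_priority(regions, recognize, difference, preset_difference=60.0):
        """Try candidate regions from highest to lowest priority and return the
        first recognition result whose difference degree clears the preset
        threshold, i.e. the first result the background is unlikely to have
        corrupted; return None when every region fails the check."""
        for region in sorted(regions, key=lambda r: r["priority"], reverse=True):
            candidate = recognize(region["image"])   # candidate recognition result
            if candidate is not None and (
                    difference(region["image"], candidate) > preset_difference):
                return candidate                     # current target recognition result
        return None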
In one embodiment, after the recognition result of the highest-priority current image area to be identified is taken as the current target recognition result, the method further comprises: judging whether the similarity of the current target recognition result is larger than a preset recognition threshold; when it is larger than the preset recognition threshold, judging the current target recognition result to be a correct recognition result; and when it is smaller than or equal to the preset recognition threshold, judging the current target recognition result to be an incorrect recognition result.
Specifically, the preset recognition threshold is a preset critical value for judging whether the recognition result is correct. The similarity between the current target recognition result and a preset recognition result is calculated; the higher the similarity, the more accurate the recognition result. When the similarity is larger than the preset recognition threshold, the current target recognition result is judged to be a correct recognition result; otherwise it is an incorrect recognition result. Further judging the recognition result in this way, after the current target recognition result has been determined according to the difference degree, improves the recognition accuracy.
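The similarity measure itself is left unspecified; as a minimal stand-in, a string-similarity ratio can play that role. The difflib choice and the 0.8 threshold below are assumptions for illustration only.

    import difflib

    def is_correct_result(target_result: str, preset_result: str,
                          recognition_threshold: float = 0.8) -> bool:
        """Judge the current target recognition result correct only when its
        similarity to the preset recognition result exceeds the threshold."""
        similarity = difflib.SequenceMatcher(None, target_result, preset_result).ratio()
        return similarity > recognition_threshold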
In one embodiment, after the recognition result of the highest-priority current image area to be identified is taken as the current target recognition result, the method further comprises: acquiring an image area to be identified that differs from the current image area to be identified as a reference image area; taking the reference image area as the current image area to be identified and entering the step of identifying the candidate recognition result in the current image area to be identified, until a reference target recognition result is obtained; and judging the difference value between the reference target recognition result and the current target recognition result, and when the difference value is larger than a preset difference threshold, judging that the current target recognition result and the reference target recognition result are correct recognition results.
Specifically, the reference image area is the image area to be identified in the image corresponding to the previous or next moment; that is, the current image area to be identified and the reference image area come from images at different moments. The previous and next moments may be two adjacent sampling moments or non-adjacent sampling moments. For example, during video recording, after the current target recognition result corresponding to the current image area to be identified at the current moment has been recognized, the recognition result of the image area to be identified at the previous or next moment can be referred to in order to determine whether the current result is valid: the difference value between that target recognition result and the current target recognition result is calculated and compared with the preset difference value. When the difference value is larger than the preset difference value, there is a high probability of misrecognition and the recognition result is judged to be wrong; otherwise the recognition is judged to be correct. Further judging whether the recognition result is correct according to the recognition results at different moments improves the accuracy of the recognition result.
In one embodiment, when the current image area to be identified is a preset area in a game picture, after the candidate recognition result is taken as the current target recognition result when the difference degree is larger than the preset difference degree, the method further includes: judging whether adjacent correct recognition results among the correct recognition results have changed; and when a correct recognition result changes, intercepting the game picture corresponding to the changed correct recognition result and taking the intercepted game picture as the image corresponding to a game highlight moment.
Specifically, when the current image area to be identified is a preset area in a game picture, the current image is a game picture. In a game scenario, the game interface image is processed, for example to capture highlight moments: it is first judged whether adjacent recognition results of the specific area jump. A jump indicates that a major event has occurred in the game, so that moment is treated as an important moment, the game picture corresponding to it is kept, and the images corresponding to the highlight moments of the whole game are obtained.
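A small sketch of that capture loop follows, where save_frame stands in for whatever screenshot call the terminal exposes; the patent does not name one.

    def capture_highlights(recognized_frames, save_frame):
        """Whenever the correctly recognized value of the preset game-screen
        area changes between adjacent samples, keep that frame as a
        highlight-moment image."""
        previous = None
        for result, frame in recognized_frames:  # (recognition result, video frame) pairs
            if previous is not None and result != previous:
                save_frame(frame)                # intercept the picture at the jump
            previous = result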
In one embodiment, after the candidate recognition result is taken as the current target recognition result, the method further comprises: judging whether the current target recognition result is the trigger recognition result for starting screen recording; and when it is, generating a screen-recording instruction, executing the screen-recording instruction, and starting the screen recording.
Specifically, the trigger recognition result for starting screen recording is a preset condition that triggers the recording; the condition can be customized, and the conditions in different scenarios may differ. For example, if the recognition result that triggers the start of screen recording is the character A, then when the target recognition result corresponding to the current image area to be identified is recognized to contain the character A, a screen-recording instruction is generated and executed, and the screen recording starts. Starting the recording through automatic recognition of the image makes screen recording more convenient.
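A minimal sketch of the trigger check, with start_recorder as a placeholder for the platform's actual recording API and the character A taken from the example above:

    def maybe_start_recording(target_result: str, start_recorder,
                              trigger_text: str = "A") -> bool:
        """Generate and execute the screen-recording instruction when the
        trigger recognition result appears in the current target result."""
        if trigger_text in target_result:
            start_recorder()  # placeholder for the real screen-recording call
            return True
        return False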
In one embodiment, after the candidate recognition result is taken as the current target recognition result, the method further comprises: judging whether the current target recognition result is the trigger recognition result for ending screen recording; and when it is, generating an end instruction, executing the end instruction, and ending the screen recording.
Specifically, as with triggering the start of screen recording, a judgment condition for triggering the end of screen recording needs to be set. After the recording starts, it is judged whether the current target recognition result corresponding to the current image area to be identified in a recorded video frame is the trigger recognition result for ending the recording; when it is, an end instruction is generated and executed, and the screen recording ends. Automatically triggering the end instruction makes screen recording more convenient and avoids recording content in unnecessary time periods.
In one embodiment, after the screen-recording instruction is generated and executed and the screen recording has started, the method further comprises: storing the video frames obtained by screen recording; recognizing a first recognition result of the image area to be identified in the current frame of the video frames; judging the matching result of the first recognition result against a preset recognition result; when the matching is unsuccessful, recording an error recognition, taking the next frame as the current frame, and returning to the step of recognizing the first recognition result of the image area to be identified in the current frame; and when the accumulated number of error recognitions equals the preset number, generating an end instruction, executing the end instruction, and ending the screen recording.
Specifically, after screen recording starts, the corresponding video frames are stored. For example, if recording started at 9:00 and the current time is 9:02 with the video still being recorded, the current frame of the video is acquired, the first recognition result of its image area to be identified is recognized, and it is judged whether the first recognition result matches the preset recognition result. A match indicates normal recognition; a mismatch is counted as an error recognition. When the accumulated number of error recognitions exceeds the preset number, it indicates that the game has ended, so an end instruction is generated and executed and the screen recording ends. The preset recognition result is the judgment condition for deciding whether the recorded content is valid content.
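A sketch of this stop condition follows; resetting the counter on a successful match is an added assumption, since the text speaks only of an accumulated error count, and recognize, preset_result, and stop_recorder are placeholders.

    def record_until_invalid(frames, recognize, preset_result,
                             stop_recorder, preset_error_count: int = 5) -> None:
        """End the screen recording once the image area fails to return the
        preset recognition result preset_error_count times."""
        errors = 0
        for frame in frames:                      # frames stored while recording
            if recognize(frame) == preset_result:
                errors = 0                        # content is still valid
            else:
                errors += 1                       # record one error recognition
                if errors >= preset_error_count:
                    stop_recorder()               # generate and execute the end instruction
                    return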
According to the image data processing method, the method comprises: identifying a candidate recognition result in a current image area to be identified, calculating the difference degree between the image features of the current image area to be identified and the image features of the candidate recognition result, and, when the difference degree is larger than the preset difference degree, taking the candidate recognition result as the current target recognition result. By judging the difference degree between the image area and the recognition result, the reliability of the recognition is assessed: when the difference degree is larger than the preset difference degree, the background has little influence on the recognition result and the result is more likely to be accurate, so the recognition accuracy is improved. Recognition accuracy is further improved through the preprocessing and multi-condition judgments described above.
In a specific embodiment, the image data processing method proceeds as follows:
for better explanation, taking the wonderful moment of the recorded game as an example, when the game is recorded, the starting moment and the ending moment of the recorded game need to be identified, and a specific text message or a graphic message exists in a specific area for identifying the starting, the ending and the like of the game when the common game is started, and the skill use condition and the corresponding competitive result exist in the game process. Describing the night of the fort, at the beginning of the fort night game, there is an area on top of the game interface on the terminal for identifying the direction, which contains four characters: E. w, S and N, different characters represent different directions, when the characters are recognized in the area, as the game scene is changed continuously, or the color matching of the background and the color matching of the characters are small in the design, the situation of misrecognition is easy to occur in the recognition process, in order to ensure the accuracy of recognition, the color characteristics of the characters obtained according to the area recognition can be judged first to be compared with the color characteristics of the area, when the difference is large, the misrecognition possibility is small, the recognition result is correct, when the difference is small, the misrecognition possibility is large, the next area is further recognized in order to improve the recognition accuracy, and the recognition and judgment processes are repeated. In the specific recognition process, a binarization method of the image can be used for judging whether the difference degree of the colors is larger than a preset difference degree. For example, when the characters E, W, S, N and other characters of the picture at the beginning of the game have obvious RGB differences with the surrounding, the RGB values of the characters are used as the binarization standard of the regional picture, so that the accuracy of character recognition is improved.
After the start of the game is recognized, a screen-recording instruction is generated and executed, and the screen recording starts. After recording begins, highlight moments during the game are captured and output. Highlight moments are mainly kill scenes, competitive results, and the like. Taking kill scenes as an example: when the kill count is recognized, the current image area used for representing the kill count in the game interface is acquired and the kill count in that area is recognized. When the data jumps, the game scene at that moment may be a highlight-moment game picture. To improve the accuracy of capturing highlight-moment pictures, the kill data is further checked, for example against the number of participating players: when the recognized kill count exceeds the number of participating players, the recognition is wrong; otherwise the moment is a highlight moment.
In one embodiment, the kill data recognized in historical image frames of the current image area is acquired, and it is judged whether the difference in kill counts between the two is larger than a preset difference value; when it is larger than the preset difference value, the recognition is correct, otherwise the recognition is wrong.
In one embodiment, the kill data recognized in a historical image frame of the current image is acquired, and it is judged whether the difference in kill counts between the two is larger than the preset difference value. When it is larger, it is further judged whether the kill data of the current image exceeds the number of participating players: when it exceeds that number, the recognition is wrong; otherwise the moment is a highlight moment. For example, if the kill count is 0 in one second and 9 in the next, an error occurred when the image was recognized as characters; this check solves the problem that recognition errors prevent the user's game highlight moments from being obtained accurately.
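The plausibility check in this embodiment can be sketched as follows; the jump threshold of 3 is an assumed value, while the players-in-match bound and the 0-to-9 example come from the text.

    def classify_kill_jump(previous_kills: int, current_kills: int,
                           players_in_match: int, preset_difference: int = 3) -> str:
        """Validate a jump in the recognized kill count: a large jump may mark
        a highlight moment, but a count exceeding the number of participating
        players can only be a misread (e.g. 0 kills one second, 9 the next)."""
        if abs(current_kills - previous_kills) <= preset_difference:
            return "no_event"        # no notable change between samples
        if current_kills > players_in_match:
            return "misrecognition"  # impossible value: recognition error
        return "highlight_moment"    # plausible jump: capture this frame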
After the screen recording starts, when the game-finished picture is captured, an end instruction is generated and executed and the screen recording ends. When the game-ending picture is not captured, it is judged whether the current image area can correctly return a recognition result; when the number of times the recognition result cannot be correctly returned exceeds a preset number, an end instruction is generated and executed and the screen recording ends. For example, if the game start area fails to return a correct result five times and the kill-count area also fails to return a correct result, it can be judged that the player has indeed exited the game, and the screen recording is ended. Recognizing the end of the game in this way solves the problem that exiting the screen too quickly, so that the corresponding game-ending picture is never captured, would otherwise cause anomalies in the normal screen-recording logic.
Fig. 3 is a flow chart of an image data processing method in one embodiment. It should be understood that, although the steps in the flowchart of fig. 3 are shown in sequence as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be executed in other orders. Moreover, at least some of the steps in fig. 3 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times; the order of their execution is not necessarily sequential, and they may be performed in turn or alternately with at least a portion of the sub-steps or stages of other steps.
In one embodiment, a computer device is provided comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program: identifying a candidate recognition result in a current image area to be identified; calculating the difference degree between the image features of the current image area to be identified and the image features of the candidate recognition result; and when the difference degree is larger than the preset difference degree, taking the candidate recognition result as the current target recognition result.
In one embodiment, there are a plurality of current image areas to be identified, and the processor, when executing the computer program, further implements the following steps: acquiring the priority level corresponding to each current image area to be identified; identifying the candidate recognition result in the current image area to be identified with the highest priority level; calculating the difference degree between the image features of the highest-priority current image area to be identified and the image features of the corresponding candidate recognition result; judging whether the difference degree of the highest-priority current image area to be identified is larger than the preset difference degree; and when the difference degree is larger than or equal to the preset difference degree, taking the recognition result of the highest-priority current image area to be identified as the current target recognition result.
In one embodiment, when the difference degree of the highest-priority current image area to be identified is smaller than the preset difference degree, the processor, when executing the computer program, further implements the following steps: acquiring the current image area to be identified with the next-highest priority according to the priority levels corresponding to the current image areas to be identified, taking that area as the new highest-priority current image area to be identified, and executing the step of identifying the candidate recognition result in the highest-priority current image area to be identified.
In one embodiment, after the recognition result of the highest-priority current image area to be identified is taken as the current target recognition result, the processor, when executing the computer program, further implements the following steps: judging whether the similarity of the current target recognition result is larger than the preset recognition threshold; when it is larger than the preset recognition threshold, judging the current target recognition result to be a correct recognition result; and when it is smaller than or equal to the preset recognition threshold, judging the current target recognition result to be an incorrect recognition result.
In one embodiment, after the recognition result of the highest-priority current image area to be identified is taken as the current target recognition result, the processor, when executing the computer program, further implements the following steps: acquiring an image area to be identified that differs from the current image area to be identified as a reference image area; taking the reference image area as the current image area to be identified and entering the step of identifying the candidate recognition result in the current image area to be identified, until a reference target recognition result is obtained; and judging the difference value between the reference target recognition result and the current target recognition result, and when the difference value is larger than the preset difference threshold, judging that the current target recognition result and the reference target recognition result are correct recognition results.
In one embodiment, when the current image area to be identified is a preset area in a game picture, after the candidate recognition result is taken as the current target recognition result when the difference degree is larger than the preset difference degree, the processor, when executing the computer program, further implements the following steps: judging whether adjacent correct recognition results among the correct recognition results have changed; and when a correct recognition result changes, intercepting the game picture corresponding to the changed correct recognition result and taking the intercepted game picture as the image corresponding to a game highlight moment.
In one embodiment, after the candidate recognition result is taken as the current target recognition result, the processor, when executing the computer program, further implements the following steps: judging whether the current target recognition result is the trigger recognition result for starting screen recording; and when it is, generating a screen-recording instruction, executing the screen-recording instruction, and starting the screen recording.
In one embodiment, after the screen-recording instruction is generated and executed and the screen recording has started, the processor, when executing the computer program, further implements the following steps: storing the video frames obtained by screen recording; recognizing a first recognition result of the image area to be identified in the current frame of the video frames; judging whether the first recognition result is the preset recognition result; when it is not, recording an error recognition, taking the next frame as the current frame, and returning to the step of recognizing the first recognition result of the image area to be identified in the current frame; and when the accumulated number of error recognitions equals the preset number, generating an end instruction, executing the end instruction, and ending the screen recording.
In one embodiment, after the candidate recognition result is taken as the current target recognition result, the processor, when executing the computer program, further implements the following steps: judging whether the current target recognition result is the trigger recognition result for ending screen recording; and when it is, generating an end instruction, executing the end instruction, and ending the screen recording.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon which, when executed by a processor, implements the following steps: identifying a candidate recognition result in a current image area to be identified; calculating the difference degree between the image features of the current image area to be identified and the image features of the candidate recognition result; and when the difference degree is larger than the preset difference degree, taking the candidate recognition result as the current target recognition result.
In one embodiment, there are a plurality of current image areas to be identified, and the computer program, when executed by the processor, further implements the following steps: acquiring the priority level corresponding to each current image area to be identified; identifying the candidate recognition result in the current image area to be identified with the highest priority level; calculating the difference degree between the image features of the highest-priority current image area to be identified and the image features of the corresponding candidate recognition result; judging whether the difference degree of the highest-priority current image area to be identified is larger than the preset difference degree; and when the difference degree is larger than or equal to the preset difference degree, taking the recognition result of the highest-priority current image area to be identified as the current target recognition result.
In one embodiment, when the difference degree of the highest-priority current image area to be identified is smaller than the preset difference degree, the computer program, when executed by the processor, further implements the following steps: acquiring the current image area to be identified with the next-highest priority according to the priority levels corresponding to the current image areas to be identified, taking that area as the new highest-priority current image area to be identified, and executing the step of identifying the candidate recognition result in the highest-priority current image area to be identified.
In one embodiment, after the recognition result of the current image area to be identified with the highest priority level is taken as the current target recognition result, the computer program, when executed by the processor, further implements the following steps: judging whether the current target recognition result is greater than a preset recognition threshold; when it is greater than the preset recognition threshold, judging it to be a correct recognition result; and, when it is smaller than or equal to the preset recognition threshold, judging it to be an incorrect recognition result.
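Read literally, this compares the recognition result itself to a numeric threshold, which makes sense if the result carries a confidence score. The one-line sketch below makes that reading explicit; treating the result as a score is an interpretation, not something this embodiment spells out.

    def is_correct(target_result_score, preset_threshold=0.8):
        """Judge the current target recognition result correct when its score exceeds the threshold."""
        return target_result_score > preset_threshold  # False covers the smaller-or-equal (incorrect) case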
In one embodiment, after the recognition result of the current image area to be identified with the highest priority level is taken as the current target recognition result, the computer program, when executed by the processor, further implements the following steps: acquiring an image area to be identified that is different from the current image area to be identified as a reference image area, taking the reference image area as the current image area to be identified, and returning to the step of identifying candidate recognition results in the current image area to be identified, until a reference target recognition result is obtained; then judging the difference value between the reference target recognition result and the current target recognition result, and judging both the current target recognition result and the reference target recognition result to be correct recognition results when the difference value is greater than a preset difference threshold.
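The cross-check amounts to recognizing a second, independent area and comparing the two results. In the sketch below, identify() is the same hypothetical recognizer as above and result_difference() is a hypothetical numeric comparison between two recognition results; the threshold value is illustrative.

    def cross_validate(current_result, reference_area, preset_difference=0.3):
        """Judge both results correct when their difference value exceeds the threshold."""
        reference_result = identify(reference_area)  # reference target recognition result
        difference_value = result_difference(current_result, reference_result)
        return difference_value > preset_difference  # True: both results judged correct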
In one embodiment, when the current image area to be identified is a preset area in a game screen, after the candidate recognition result is taken as the current target recognition result when the degree of difference is greater than the preset degree of difference, the computer program, when executed by the processor, further implements the following steps: judging whether adjacent correct recognition results have changed; and, when they have changed, capturing the game picture corresponding to the changed correct recognition result and taking the captured game picture as an image corresponding to a game highlight moment.
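For the game-screen case, "adjacent correct recognition results" can be read as consecutive readings of the same preset HUD area, such as a kill counter. A sketch under that assumption, with save_picture() as a hypothetical hook that stores the current game picture:

    def capture_highlights(correct_results, game_pictures):
        """Save the game picture whenever two adjacent correct results differ."""
        previous = None
        for result, picture in zip(correct_results, game_pictures):
            if previous is not None and result != previous:  # adjacent correct results changed
                save_picture(picture)  # hypothetical: the captured picture is the highlight image
            previous = result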
In one embodiment, after the candidate recognition result is taken as the current target recognition result, the computer program, when executed by the processor, further implements the following steps: judging whether the current target recognition result is a trigger recognition result for starting screen recording; and, when it is, generating a screen recording instruction, executing the screen recording instruction, and starting screen recording.
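The start trigger reduces to a membership test on the current target recognition result. A sketch in which the trigger set (a "match started" banner) and the recorder object are illustrative assumptions, not part of this embodiment:

    START_TRIGGERS = {"match_started"}  # hypothetical trigger recognition results

    def maybe_start_recording(target_result, recorder):
        """Generate and execute a screen recording instruction when a start trigger is recognized."""
        if target_result in START_TRIGGERS:
            recorder.start()  # hypothetical recorder API; starts screen recording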
In one embodiment, after the screen recording instruction is generated and executed and screen recording has started, the computer program, when executed by the processor, further implements the following steps: storing the video frames obtained by screen recording; identifying a first recognition result of the image area to be identified of the current frame in the video frames; judging whether the first recognition result is a preset recognition result; when it is not the preset recognition result, incrementing a misrecognition count, taking the next frame as the current frame, and returning to the step of identifying the first recognition result of the image area to be identified of the current frame; and, when the accumulated misrecognition count equals a preset count, generating an ending instruction, executing the ending instruction, and ending the screen recording.
In one embodiment, after the candidate recognition result is taken as the current target recognition result, the computer program, when executed by the processor, further implements the following steps: judging whether the current target recognition result is a trigger recognition result for ending screen recording; and, when it is, generating an ending instruction, executing the ending instruction, and ending the screen recording.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware; the program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the method embodiments described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It should be noted that, in this document, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual relationship or order between those entities or actions. Moreover, the terms "comprises", "comprising", and any variations thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus comprising a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises it.
The foregoing describes only specific embodiments of the invention, provided so that those skilled in the art can understand and practice it. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. The invention is therefore not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A method of processing image data, the method comprising:
identifying candidate recognition results in a current image area to be identified;
calculating the degree of difference between the image features of the current image area to be identified and the image features of a candidate recognition result;
taking the candidate recognition result as a current target recognition result when the degree of difference is greater than a preset degree of difference;
wherein the method further comprises:
acquiring the priority level corresponding to each current image area to be identified;
identifying candidate recognition results in the current image area to be identified of the highest priority level;
calculating the degree of difference between the image features of the current image area to be identified of the highest priority level and the image features of the corresponding candidate recognition result;
judging whether the degree of difference of the current image area to be identified of the highest priority level is greater than the preset degree of difference;
taking the recognition result of the current image area to be identified of the highest priority level as the current target recognition result when the degree of difference of the current image area to be identified of the highest priority level is greater than or equal to the preset degree of difference;
and, when the degree of difference of the current image area to be identified of the highest priority level is smaller than the preset degree of difference, acquiring the current image area to be identified of the next-highest priority level according to the priority levels corresponding to the current image areas to be identified, taking the current image area to be identified of the next-highest priority level as the current image area to be identified of the highest priority level, and returning to the step of identifying candidate recognition results in the current image area to be identified of the highest priority level.
2. The method according to claim 1, wherein, after the recognition result of the current image area to be identified of the highest priority level is taken as the current target recognition result, the method further comprises:
judging whether the current target recognition result is greater than a preset recognition threshold, and judging the current target recognition result to be a correct recognition result when it is greater than the preset recognition threshold;
and judging the current target recognition result to be an incorrect recognition result when it is smaller than or equal to the preset recognition threshold.
3. The method according to claim 1, wherein, after the recognition result of the current image area to be identified of the highest priority level is taken as the current target recognition result, the method further comprises:
acquiring an image area to be identified that is different from the current image area to be identified as a reference image area, taking the reference image area as the current image area to be identified, and returning to the step of identifying candidate recognition results in the current image area to be identified, until a reference target recognition result is obtained;
and judging the difference value between the reference target recognition result and the current target recognition result, and judging the current target recognition result and the reference target recognition result to be correct recognition results when the difference value is greater than a preset difference threshold.
4. The method according to claim 2 or 3, wherein, when the current image area to be identified is a preset area in a game screen, after the candidate recognition result is taken as the current target recognition result when the degree of difference is greater than the preset degree of difference, the method further comprises:
judging whether adjacent correct recognition results have changed, capturing the game picture corresponding to the changed correct recognition result when they have changed, and taking the captured game picture as an image corresponding to a game highlight moment.
5. The method of claim 1, wherein, after the candidate recognition result is taken as the current target recognition result, the method further comprises:
judging whether the current target identification result is a trigger identification result for starting screen recording;
and when the current target identification result is the trigger identification result of starting screen recording, generating a screen recording instruction, executing the screen recording instruction and starting screen recording.
6. The method of claim 5, wherein, after the screen recording instruction is generated and executed and screen recording has started, the method further comprises:
storing the video frames obtained by screen recording, and identifying a first recognition result of an image area to be identified of the current frame in the video frames;
judging whether the first recognition result is a preset recognition result, and incrementing a misrecognition count when it is not the preset recognition result;
and taking the next frame as the current frame, returning to the step of identifying the first recognition result of the image area to be identified of the current frame in the video frames, generating an ending instruction when the accumulated misrecognition count equals a preset count, executing the ending instruction, and ending the screen recording.
7. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 6.
8. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
CN201910195309.3A 2019-03-14 2019-03-14 Image data processing method, computer device, and storage medium Active CN110009004B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910195309.3A CN110009004B (en) 2019-03-14 2019-03-14 Image data processing method, computer device, and storage medium

Publications (2)

Publication Number Publication Date
CN110009004A (en) 2019-07-12
CN110009004B (en) 2023-09-01

Family

ID=67167136

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910195309.3A Active CN110009004B (en) 2019-03-14 2019-03-14 Image data processing method, computer device, and storage medium

Country Status (1)

Country Link
CN (1) CN110009004B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110652726B (en) * 2019-09-27 2022-10-25 杭州顺网科技股份有限公司 Game auxiliary system based on image recognition and audio recognition
CN111462252B (en) * 2020-04-09 2023-10-24 北京爱笔科技有限公司 Method, device and system for calibrating camera device
CN111476231B (en) * 2020-06-22 2024-01-12 努比亚技术有限公司 Image area identification method, device and computer readable storage medium
CN111885303A (en) * 2020-07-06 2020-11-03 雍朝良 Active tracking recording and shooting visual method
CN112784835B (en) * 2021-01-21 2024-04-12 恒安嘉新(北京)科技股份公司 Method and device for identifying authenticity of circular seal, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104424651A (en) * 2013-08-26 2015-03-18 株式会社理光 Method and system for tracking object
CN107297074A (en) * 2017-06-30 2017-10-27 努比亚技术有限公司 Game video method for recording, terminal and storage medium
CN107509115A (en) * 2017-08-29 2017-12-22 武汉斗鱼网络科技有限公司 A kind of method and device for obtaining live middle Wonderful time picture of playing
CN108803993A (en) * 2018-06-13 2018-11-13 南昌黑鲨科技有限公司 Exchange method, intelligent terminal and the computer readable storage medium of application program
CN109068150A (en) * 2018-08-07 2018-12-21 深圳市创梦天地科技有限公司 A kind of excellent picture extracting method, terminal and the computer-readable medium of video

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100464075B1 (en) * 2001-12-28 2004-12-30 엘지전자 주식회사 Video highlight generating system based on scene transition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant