CN117555637A - Scene recognition method and device and electronic equipment - Google Patents

Scene recognition method and device and electronic equipment

Info

Publication number
CN117555637A
CN117555637A
Authority
CN
China
Prior art keywords
scene
interface
result
results
target application
Prior art date
Legal status
Pending
Application number
CN202311526001.5A
Other languages
Chinese (zh)
Inventor
唐毓刚
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202311526001.5A
Publication of CN117555637A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/30 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F 2300/308 Details of the user interface
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a scene recognition method and device and electronic equipment, belonging to the technical field of interface processing. The method comprises the following steps: acquiring interface related data of the current interface of a target application while the target application is running, the interface related data comprising interface rendering data and interface touch data; obtaining a first scene result according to the interface rendering data and a second scene result according to the interface touch data; and determining the scene result of the target application according to the first scene result and the second scene result.

Description

Scene recognition method and device and electronic equipment
Technical Field
The application belongs to the technical field of interface processing, and particularly relates to a scene recognition method, a scene recognition device and electronic equipment.
Background
With the development of electronic devices, the variety of applications installed on them keeps growing. Some applications place high demands on device performance, others place low demands, and still others require high performance only in certain special scenarios. Taking a game application as an example, high performance is not needed in every scene: a simple settlement interface or mall interface runs well without a high processor frequency, and a high frequency there only drains the battery and heats the phone, whereas a complex combat scene does require high device performance to guarantee a smooth running effect. How to accurately identify the application scene is therefore one of the problems currently to be solved.
Disclosure of Invention
The embodiments of the application aim to provide a scene recognition method and device and electronic equipment, which can solve the problem in the prior art that an application scene cannot be accurately recognized.
In a first aspect, an embodiment of the present application provides a scene recognition method, where the method includes:
acquiring interface related data of a current interface of a target application in the running process of the target application; the interface related data comprises interface rendering data and interface touch data;
obtaining a first scene result according to the interface rendering data and obtaining a second scene result according to the interface touch data;
and determining the scene result of the target application according to the first scene result and the second scene result.
In a second aspect, an embodiment of the present application provides a scene recognition device, including:
the acquisition module is used for acquiring the interface related data of the current interface of the target application in the running process of the target application; the interface related data comprises interface rendering data and interface touch data;
the processing module is used for obtaining a first scene result according to the interface rendering data and obtaining a second scene result according to the interface touch data;
And the determining module is used for determining the scene result of the target application according to the first scene result and the second scene result.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the scene recognition method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the scene recognition method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the scene recognition method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the scene recognition method as described in the first aspect.
In the embodiments of the application, while the target application is running, interface related data of its current interface are acquired, the data comprising interface rendering data and interface touch data; a first scene result is then obtained according to the interface rendering data, a second scene result is obtained according to the interface touch data, and the scene result of the target application is determined according to the first scene result and the second scene result. Because the scene result of the target application is determined from both scene results, the accuracy of application scene recognition can be improved.
Drawings
FIG. 1 is a schematic flow chart diagram of a scene recognition method according to one embodiment of the application;
FIG. 2 is a schematic interface diagram of a particular interface corresponding to a predetermined class of scenarios in accordance with one embodiment of the present application;
FIG. 3 is a schematic interface diagram of a particular interface corresponding to a predetermined class of scenes in accordance with another embodiment of the present application;
FIG. 4 is a schematic block diagram of a scene recognition device in accordance with an embodiment of the application;
FIG. 5 is a schematic block diagram of an electronic device in accordance with an embodiment of the present application;
fig. 6 is a schematic hardware configuration diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments. Apparently, the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application fall within the protection scope of the present application.
The terms "first", "second" and the like in the description and claims are used to distinguish between similar objects and do not necessarily describe a particular sequence or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that embodiments of the present application can be implemented in sequences other than those illustrated or described herein. Moreover, the objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The scene recognition method provided by the embodiment of the application is described in detail below by means of specific embodiments and application scenes thereof with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a scene recognition method according to an embodiment of the present application. As shown in fig. 1, the method includes the following steps S102-S106:
s102, acquiring interface related data of a current interface of a target application in the running process of the target application; the interface related data includes interface rendering data and interface touch data.
Wherein the interface rendering data may include at least one of: drawing submission times, canvas buffering times, memory read-write quantity, and generation times of target rendering data. Optionally, the interface of the target application performs picture drawing (or rendering) through a graphics API (application programming interface), and this embodiment defines the behavior of the application calling the graphics API as rendering data. Each drawing submission on the interface is recorded as one time, and the total number of drawing submissions is the drawing submission times. Canvas buffering is the memory buffering that stores picture data in the drawing or rendering program, and the number of canvas bufferings is the canvas buffering times. The memory read-write quantity includes a memory read quantity and a memory write quantity. The target rendering data may involve predetermined special rendering functions, such as functions for reading memory back or generating texture data, and so forth.
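By way of illustration only (not part of the patent), the following Kotlin sketch shows one way such per-frame rendering statistics could be represented and accumulated from graphics-API hooks; all names and the collection mechanism are assumptions.

```kotlin
// Illustrative sketch only: field and hook names are assumptions, not from the patent.
data class InterfaceRenderData(
    val drawSubmitCount: Int,      // drawing submission times in the frame
    val canvasBufferCount: Int,    // canvas buffering times in the frame
    val memoryReadBytes: Long,     // memory read quantity
    val memoryWriteBytes: Long,    // memory write quantity
    val targetRenderDataCount: Int // generation times of target rendering data, e.g. read-back or texture generation
)

// A hypothetical per-frame collector incremented from graphics-API hooks.
class RenderStatsCollector {
    private var draws = 0
    private var buffers = 0
    private var reads = 0L
    private var writes = 0L
    private var special = 0

    fun onDrawSubmitted() { draws++ }
    fun onCanvasBuffered() { buffers++ }
    fun onMemoryRead(bytes: Long) { reads += bytes }
    fun onMemoryWrite(bytes: Long) { writes += bytes }
    fun onTargetRenderCall() { special++ } // a special rendering function was invoked

    // Snapshot the current frame's statistics and reset for the next frame.
    fun snapshotAndReset(): InterfaceRenderData =
        InterfaceRenderData(draws, buffers, reads, writes, special).also {
            draws = 0; buffers = 0; reads = 0L; writes = 0L; special = 0
        }
}
```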
The interface touch data may include at least one of: high frequency click position, high frequency slide position. Optionally, when the interface touch data of the current interface is acquired, data such as the clicking times, the clicking positions, the sliding times, the sliding tracks, the sliding positions and the like of the user on the current interface can be acquired first, and then the high-frequency clicking positions and the high-frequency sliding positions of the user on the current interface are determined according to the acquired data. The high frequency click position may be a position where the number of clicks and/or the frequency of clicks is greater than or equal to a first preset threshold, and the high frequency slide position may be a position where the number of slides and/or the frequency of slides is greater than or equal to a second preset threshold. For example, the user slides a plurality of times at a certain position (the number of slides is greater than the second preset threshold), and the position is the high-frequency sliding position.
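A minimal sketch of how the high-frequency positions might be derived from raw touch events, assuming nearby touches are clustered on a coarse grid; the 50-pixel cell size and the count threshold are illustrative values, not taken from the patent.

```kotlin
// Sketch under assumptions: grid clustering and cell size are illustrative.
data class TouchPoint(val x: Float, val y: Float)

fun highFrequencyPositions(
    events: List<TouchPoint>,
    minCount: Int // the first/second preset threshold from the method
): List<TouchPoint> {
    // Bucket events into 50 px grid cells so nearby touches count as one position.
    return events
        .groupBy { Pair((it.x / 50).toInt(), (it.y / 50).toInt()) }
        .filterValues { it.size >= minCount } // keep only frequently touched cells
        .map { (_, pts) ->
            // Represent each frequent cell by the mean of its touch points.
            TouchPoint(
                pts.map { it.x }.average().toFloat(),
                pts.map { it.y }.average().toFloat()
            )
        }
}
```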
Different interfaces of the target application may be delimited by frames, i.e. one interface per frame. In this embodiment, the scene result (such as the interface scene type) of the interface corresponding to each frame may be determined frame by frame. Methods for obtaining the interface related data are known in the art and are not described further here.
S104, obtaining a first scene result according to the interface rendering data and obtaining a second scene result according to the interface touch data.
Optionally, the interface rendering data is compared with the interface rendering standard data corresponding to the predetermined class scenes in the target application to obtain the first scene result. The first scene result includes: the first predetermined class scene corresponding to the interface rendering standard data that the interface rendering data matches. The manner of acquiring the interface rendering standard data will be described in detail in the following embodiments.
Optionally, the interface touch data is compared with the interface touch standard data corresponding to the target application to obtain the second scene result. The second scene result includes: the second predetermined class scene corresponding to the interface touch standard data that the interface touch data matches. The manner of acquiring the interface touch standard data will be described in detail in the following embodiments.
S106, determining a scene result of the target application according to the first scene result and the second scene result.
The scene result of the target application may include an interface scene type of a current interface of the target application.
In the embodiments of the application, while the target application is running, interface related data of its current interface are acquired, the data comprising interface rendering data and interface touch data; a first scene result is then obtained according to the interface rendering data, a second scene result is obtained according to the interface touch data, and the scene result of the target application is determined according to the first scene result and the second scene result. Because the scene result of the target application is determined from both scene results, the accuracy of application scene recognition can be improved.
In one embodiment, before acquiring the data related to the interface of the current interface of the target application in the running process of the target application, the interface standard data corresponding to the target application needs to be acquired in advance, including the interface rendering standard data and the interface touch standard data. The interface rendering criteria data includes at least one of: standard drawing submission times, standard canvas buffering times, standard memory read-write quantity and standard generation times of target rendering data. The interface touch standard data comprises at least one of the following: region position information of the standard high-frequency click region, region position information of the standard high-frequency slide region.
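The interface standard data could be held in a structure such as the following hedged sketch, which reuses the TouchPoint type from the earlier example; simplifying a touch standard region to an axis-aligned rectangle is an assumption, since the patent allows arbitrary region shapes and contour key points.

```kotlin
// Assumed shapes for the pre-collected standard data; the rectangle is a simplification.
data class Region(val left: Float, val top: Float, val right: Float, val bottom: Float) {
    fun contains(p: TouchPoint) = p.x in left..right && p.y in top..bottom
}

data class InterfaceStandardData(
    val sceneType: String,             // e.g. "settlement", "mall", "combat"
    val stdDrawSubmitCount: Int,       // standard drawing submission times
    val stdCanvasBufferCount: Int,     // standard canvas buffering times
    val stdMemoryReadWriteBytes: Long, // standard memory read-write quantity
    val stdTargetRenderCount: Int,     // standard generation times of target rendering data
    val clickRegions: List<Region>,    // standard high-frequency click regions
    val slideRegions: List<Region>     // standard high-frequency slide regions
)
```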
First, the target application is started and a specific interface belonging to a predetermined class scene in the target application is entered; the interface rendering standard data and the interface touch standard data of the specific interface are then acquired.
The predetermined class of scenes may be predetermined according to an application type of the target application. For example, when the target application is a game class application, the predetermined class scene may include a settlement scene, a mall scene, a fighting scene, and the like. Optionally, the interface of the target application makes use of a graphics API for picture drawing (or rendering). In a specific interface belonging to a predetermined class of scene, each action of drawing and submitting is recorded as one time, and the total number of drawing and submitting is the standard drawing and submitting number of the specific interface. The canvas buffering is the memory buffering for storing picture data in the drawing or rendering program, and the number of canvas buffering times in the specific interface is the standard canvas buffering times of the specific interface. The memory read-write quantity of the specific interface comprises the memory read quantity of the specific interface and the memory write quantity of the specific interface. The target rendering data may include predetermined special rendering functions, such as functions for making memory reads back or generating texture data, and so forth.
The standard high-frequency click region refers to a region on the specific interface where the click times and/or click frequency are higher than a first preset threshold; the standard high-frequency slide region refers to a region on the specific interface where the slide times and/or slide frequency are higher than a second preset threshold. The region position information may include at least one of: region range, region shape, and coordinate information of key points on the outer contour of the region.
In this embodiment, the user using the target application or a professional in the related field may start the target application and enter into a specific interface belonging to a predetermined class of scene in the target application.
After the interface standard data of the specific interface are obtained, the interface standard data of the specific interface can be stored in association with the scene information of the predetermined class scene. If the target application corresponds to a plurality of predetermined class scenes, the interface standard data of the specific interface corresponding to each predetermined class scene are obtained, and the interface standard data of each specific interface are then stored in association with the scene information of its predetermined class scene. Optionally, the interface standard data and the scene information of the predetermined class scenes may be stored in association locally on the terminal device or on a cloud device. When step S104 is executed, the interface standard data associated with the scene information of each predetermined class scene corresponding to the target application may be obtained from the terminal device or the cloud device, and the interface related data of the current interface are then compared with the obtained interface standard data to obtain the first scene result and the second scene result.
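As a sketch of the associated storage described above, an in-memory registry keyed by application can stand in for the local or cloud store; a real implementation would persist this data, and all names here are illustrative.

```kotlin
// Minimal sketch: an in-memory registry standing in for local or cloud storage.
object StandardDataStore {
    private val byApp = mutableMapOf<String, MutableList<InterfaceStandardData>>()

    // Store standard data in association with the target application.
    fun save(appId: String, data: InterfaceStandardData) {
        byApp.getOrPut(appId) { mutableListOf() }.add(data)
    }

    // Load all standard data recorded for the target application.
    fun load(appId: String): List<InterfaceStandardData> = byApp[appId].orEmpty()
}
```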
In this embodiment, by acquiring the interface standard data corresponding to the target application in advance, the target application can quickly acquire the interface standard data corresponding to the target application in the running process, so that the scene result of the target application, that is, the interface scene type of the current interface of the target application, is quickly determined according to the interface standard data and the interface related data of the current interface, and accuracy and instantaneity of identifying the interface scene type are ensured.
In one embodiment, after the interface standard data of the specific interface is acquired, it is determined whether an unprocessed specific interface exists according to the scene information of the predetermined class of scenes. If the unprocessed specific interface exists, switching to the unprocessed specific interface, and acquiring interface rendering standard data and interface touch standard data of the unprocessed specific interface. If the unprocessed specific interface does not exist, determining interface standard data corresponding to the target application according to the acquired interface rendering standard data and interface touch standard data.
Wherein the scene information of the predetermined class of scenes comprises at least one of the number of scenes and the scene type. For example, if the number of scenes in the predetermined class of scenes is 4 and only the interface standard data of 3 specific interfaces are currently acquired, it is indicated that there are 1 unprocessed specific interfaces. For another example, the scene types of the predetermined type of scenes include a settlement scene and a mall scene, and when only the interface standard data of the specific interface corresponding to the settlement scene is currently acquired, it is indicated that there are 1 unprocessed specific interfaces (i.e. the specific interfaces corresponding to the mall interface).
In this embodiment, after the interface standard data of the specific interface is obtained each time, it is determined whether an unprocessed specific interface exists, and if the unprocessed specific interface exists, the unprocessed specific interface is continuously processed, so that it can be ensured that all specific interfaces corresponding to predetermined class scenes are processed, that is, it is ensured that the interface standard data of the specific interfaces corresponding to all predetermined class scenes can be obtained, and further, it is ensured that the interface scene type of the current interface can be accurately identified, and the situation that the interface scene type of the current interface cannot be accurately identified due to incomplete interface standard data is avoided.
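A hedged sketch of this calibration loop: it keeps acquiring standard data until no unprocessed specific interface remains. Here enterInterface and collectStandardData are hypothetical helpers standing in for interface navigation and measurement, which the patent leaves platform-specific.

```kotlin
// Illustrative calibration loop; sceneTypes is the scene information of the predetermined class scenes.
fun calibrate(appId: String, sceneTypes: List<String>) {
    val processed = mutableSetOf<String>()
    while (true) {
        // Find an unprocessed specific interface, or stop when none remains.
        val next = sceneTypes.firstOrNull { it !in processed } ?: break
        enterInterface(next)                                   // switch to the unprocessed specific interface
        StandardDataStore.save(appId, collectStandardData(next)) // acquire and store its standard data
        processed += next
    }
}

// Hypothetical stubs: navigation and measurement are platform-specific.
fun enterInterface(sceneType: String) { /* drive the target app to the specific interface */ }
fun collectStandardData(sceneType: String): InterfaceStandardData =
    TODO("measure rendering and touch standard data on the current interface")
```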
In one embodiment, the interface related data of the current interface include interface rendering data. The interface rendering data include at least one of: drawing submission times, canvas buffering times, memory read-write quantity, and generation times of target rendering data. When the first scene result is obtained according to the interface rendering data, the interface rendering data of the current interface can be compared with the interface rendering standard data, specifically comparing at least one of the following: whether the drawing submission times match the standard drawing submission times, whether the canvas buffering times match the standard canvas buffering times, whether the memory read-write quantity matches the standard memory read-write quantity, and whether the generation times of the target rendering data match the standard generation times. The first scene result is a comparison result satisfying at least one of the following: the drawing submission times match the standard drawing submission times, the canvas buffering times match the standard canvas buffering times, the memory read-write quantity matches the standard memory read-write quantity, and the generation times of the target rendering data match the standard generation times.
Matching the number of rendering submissions with the standard number of rendering submissions may include: the number of drawing submissions is greater than or equal to the standard number of drawing submissions, the difference between the number of drawing submissions and the standard number of drawing submissions is less than or equal to a first preset difference threshold, the difference between the number of drawing submissions and the standard number of drawing submissions is minimum, and the number of drawing submissions is greater than or equal to a set threshold corresponding to the standard number of drawing submissions. The reason why the set threshold corresponding to the standard drawing submission number is set is that: considering that the drawing submission times on the interfaces corresponding to the same preset scene type are unstable, the drawing submission times of the current interface can be compared with a set threshold corresponding to the standard drawing submission times, and as long as the drawing submission times of the current interface are greater than or equal to the set threshold corresponding to the standard drawing submission times, the drawing submission times of the current interface can be determined to be matched with the standard drawing submission times.
Matching the canvas buffering times with the standard canvas buffering times may include: the canvas buffering times are larger than or equal to the standard canvas buffering times, the difference value between the canvas buffering times and the standard canvas buffering times is smaller than or equal to a second preset difference value threshold, the difference value between the canvas buffering times and the standard canvas buffering times is minimum, and the canvas buffering times are larger than or equal to a set threshold corresponding to the standard canvas buffering times. The reason why the set threshold corresponding to the standard canvas buffering times is set is that: considering that the canvas buffering times on the interfaces corresponding to the same preset scene type are unstable, the canvas buffering times of the current interface and the set threshold corresponding to the standard canvas buffering times can be compared, and as long as the canvas buffering times of the current interface are larger than or equal to the set threshold corresponding to the standard canvas buffering times, the canvas buffering times of the current interface can be determined to be matched with the standard canvas buffering times.
Matching the memory read-write amount with the standard memory read-write amount may include: the memory read-write quantity is larger than or equal to the standard memory read-write quantity, the difference value between the memory read-write quantity and the standard memory read-write quantity is smaller than or equal to a third preset difference value threshold value, the difference value between the memory read-write quantity and the standard memory read-write quantity is minimum, and the memory read-write quantity is larger than or equal to a set threshold value corresponding to the standard memory read-write quantity. The reason why the set threshold corresponding to the standard memory read-write quantity is set is that: considering that the memory read-write quantity on the interface corresponding to the same preset scene type has an unstable condition, the memory read-write quantity of the current interface can be compared with a set threshold value corresponding to the standard memory read-write quantity, and the memory read-write quantity of the current interface can be determined to be matched with the standard memory read-write quantity as long as the memory read-write quantity of the current interface is larger than or equal to the set threshold value corresponding to the standard memory read-write quantity.
Matching the number of times the target rendering data is generated with the standard number of times the target rendering data is generated may include: the generation times are larger than or equal to the standard generation times, the difference value between the generation times and the standard generation times is smaller than or equal to a fourth preset difference threshold value, the difference value between the generation times and the standard generation times is minimum, and the generation times are larger than or equal to a set threshold value corresponding to the standard generation times. The reason why the set threshold corresponding to the standard generation number is set is that: in consideration of the unstable condition of the generation times of the target rendering data on the interface corresponding to the same preset scene type, the generation times of the target rendering data of the current interface and the set threshold corresponding to the standard generation times can be compared, and as long as the generation times of the target rendering data of the current interface are greater than or equal to the set threshold corresponding to the standard generation times, the generation times of the target rendering data of the current interface can be determined to be matched with the standard generation times.
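The matching rules above can be sketched as follows. The patent lists several alternative criteria per metric; this illustration implements only the set-threshold variant, with the 0.9 ratio as an assumed tuning value, and declares a match when at least one metric satisfies its rule, mirroring the "at least one of" wording.

```kotlin
// Sketch of the set-threshold matching rule; the 0.9 ratio is an assumed tuning value.
fun matchesRenderStandard(
    data: InterfaceRenderData,
    std: InterfaceStandardData,
    ratio: Double = 0.9
): Boolean {
    // A metric matches when it reaches the set threshold derived from its standard value.
    fun meets(value: Long, standard: Long) = value >= (standard * ratio).toLong()
    // The patent allows matching on at least one metric, so the four checks are ORed.
    return meets(data.drawSubmitCount.toLong(), std.stdDrawSubmitCount.toLong()) ||
           meets(data.canvasBufferCount.toLong(), std.stdCanvasBufferCount.toLong()) ||
           meets(data.memoryReadBytes + data.memoryWriteBytes, std.stdMemoryReadWriteBytes) ||
           meets(data.targetRenderDataCount.toLong(), std.stdTargetRenderCount.toLong())
}

// First scene result: all predetermined class scenes whose rendering standard data match.
fun firstSceneResults(data: InterfaceRenderData, standards: List<InterfaceStandardData>): List<String> =
    standards.filter { matchesRenderStandard(data, it) }.map { it.sceneType }
```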
Optionally, according to the first scene result, determining a scene result of the target application from a plurality of predetermined class scenes. If the comparison result is that the interface rendering data is matched with the interface rendering standard data corresponding to the first predetermined class of scenes, the first predetermined class of scenes can be determined to be the scene result of the target application. Wherein the plurality of predetermined class scenes includes a first predetermined class scene.
In one case, the interface rendering data of the current interface match the interface rendering standard data corresponding to a plurality of first predetermined class scenes. In this case, among the plurality of matched interface rendering standard data, the interface rendering standard data with the smallest gap from the interface rendering data of the current interface can be selected, and the first predetermined class scene corresponding to those interface rendering standard data is determined as the scene result of the target application. For example, the drawing submission times of the current interface are greater than both the standard drawing submission times corresponding to first predetermined class scene X and those corresponding to first predetermined class scene Y. Assuming the standard drawing submission times corresponding to scene Y are greater than those corresponding to scene X, the gap between the drawing submission times of the current interface and the standard drawing submission times of scene Y is the smallest, so scene Y can be determined as the scene result of the target application, that is, as the interface scene type of the current interface of the target application.
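The smallest-gap selection could look like the following sketch, here using the drawing submission times as the distance metric; the patent does not fix which metric defines the gap, so this choice is an assumption.

```kotlin
import kotlin.math.abs

// Among several matched standards, pick the one whose standard drawing submission
// count is closest to the observed count (the "smallest gap" rule).
fun closestByDrawSubmissions(
    data: InterfaceRenderData,
    candidates: List<InterfaceStandardData>
): InterfaceStandardData? =
    candidates.minByOrNull { abs(it.stdDrawSubmitCount - data.drawSubmitCount) }
```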
The interface rendering standard data may be processed into a format that a program can interpret for storage, e.g., a code format or a table format. Table 1 below exemplifies several interface rendering standard data, where the applications involved include a game application A (application A for short) and a game application B (application B for short).
TABLE 1
For example, when the user uses the application a, the number of drawing submissions of the current interface is 110, which is greater than the set threshold corresponding to the standard number of drawing submissions of the combat interface of the application a, so that it can be determined that the interface scene type of the current interface is the combat interface. As can be seen from table 1, the set threshold values corresponding to different interface rendering standard data may be different.
In this embodiment, the interface rendering data of the current interface and the interface rendering standard data are compared, and the first scene result is determined according to the comparison result, optionally, the interface scene type of the current interface may also be determined from a plurality of predetermined types of scenes according to the first scene result.
In one embodiment, the interface related data of the current interface include interface touch data. The interface touch data comprise at least one of the following: a high-frequency click position and a high-frequency slide position. When the second scene result is obtained according to the interface touch data, the interface touch data of the current interface are compared with the interface touch standard data. The interface touch standard data comprise at least one of the following: region position information of the standard high-frequency click region and region position information of the standard high-frequency slide region. The region position information of an interface touch standard region may include at least one of the following: region range, region shape, and coordinate information of key points on the outer contour of the region.
When the interface touch data of the current interface and the interface touch standard data are compared, at least one of the following can be compared: whether the high-frequency click position is located within the standard high-frequency click region and whether the high-frequency slide position is located within the standard high-frequency slide region. The second scene result is a comparison result satisfying at least one of the following: the high frequency click position is located within the standard high frequency click region and the high frequency slide position is located within the standard high frequency slide region.
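A sketch of the touch comparison, reusing the earlier types. Requiring every observed high-frequency position to fall inside some standard region is one interpretation; the patent states the rule as satisfying at least one of the click check and the slide check, so the combination below is an assumption.

```kotlin
// Second scene result: scenes whose touch standard regions contain the observed positions.
fun secondSceneResults(
    clicks: List<TouchPoint>, // high-frequency click positions on the current interface
    slides: List<TouchPoint>, // high-frequency slide positions on the current interface
    standards: List<InterfaceStandardData>
): List<String> =
    standards.filter { std ->
        // Each observed position must lie within some standard region of the scene.
        clicks.all { p -> std.clickRegions.any { it.contains(p) } } &&
        slides.all { p -> std.slideRegions.any { it.contains(p) } }
    }.map { it.sceneType }
```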
Optionally, determining a scene result of the target application from a plurality of predetermined class scenes according to the second scene result. And if the comparison result is that the interface touch data is matched with the interface touch standard data corresponding to the second predetermined class scene, determining that the second predetermined class scene is the scene result of the target application. Wherein the plurality of predetermined class scenes includes a second predetermined class scene.
The interface touch standard areas of the specific interfaces corresponding to different predetermined scenes can be the same or different. In order to make the recognition result of the interface scene type more accurate, the interface touch standard regions of the specific interfaces corresponding to different predetermined types of scenes are usually different. Fig. 2 exemplarily illustrates a standard high-frequency click region and a standard high-frequency slide region of a specific interface a corresponding to a predetermined class scene X, and fig. 3 exemplarily illustrates a standard high-frequency click region and a standard high-frequency slide region of a specific interface B corresponding to a predetermined class scene Y, and it can be seen that the standard high-frequency click region and the standard high-frequency slide region of the specific interface a and the specific interface B are different.
In this embodiment, the interface touch data of the current interface and the interface touch standard data are compared, and the second scene result is determined according to the comparison result, optionally, the interface scene type of the current interface may also be determined from a plurality of predetermined types of scenes according to the second scene result.
In one embodiment, the interface related data of the current interface include interface rendering data and interface touch data. When determining the scene result of the target application according to the first scene result and the second scene result, the following cases arise (a code sketch consolidating these cases is given after the list):
and when the number of the first scene results is equal to 1 and the number of the second scene results is greater than 1, determining the first scene results as scene results of the target application.
And when the number of the first scene results is greater than 1 and the number of the second scene results is equal to 1, determining the second scene results as scene results of the target application.
And determining the scene result of the target application according to the number of the same scene results under the condition that the number of the first scene results and the number of the second scene results are both larger than 1 and the same scene results exist between the first scene results and the second scene results. When determining the scene result of the target application according to the number of the same scene results, determining the same scene result as the scene result of the target application under the condition that the number of the same scene results is equal to 1; and under the condition that the number of the same scene results is greater than 1, determining the scene result of the previous frame interface as the scene result of the target application.
And under the condition that the number of the first scene results and the number of the second scene results are equal to 1 and the first scene results are the same as the second scene results, determining the first scene results or the second scene results as scene results of the target application.
And determining the scene result of the previous frame interface of the target application as the scene result of the target application under the condition that the number of the first scene results and the number of the second scene results are equal to 1 and the first scene results are different from the second scene results.
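A consolidated sketch of the five cases above. The fallback to the previous frame's result when the two result sets share no scene at all is an assumption, since the patent does not spell that case out; scene names are plain strings for illustration.

```kotlin
// Combine the two result sets into the scene result of the target application.
fun decideScene(
    first: List<String>,   // first scene results (from rendering data)
    second: List<String>,  // second scene results (from touch data)
    previousFrame: String? // scene result of the previous frame interface
): String? = when {
    first.size == 1 && second.size > 1  -> first.single()  // trust the unique rendering result
    first.size > 1  && second.size == 1 -> second.single() // trust the unique touch result
    first.size > 1  && second.size > 1  -> {
        val common = first.intersect(second.toSet())
        // A unique common scene wins; otherwise fall back to the previous frame.
        if (common.size == 1) common.single() else previousFrame
    }
    first.size == 1 && second.size == 1 ->
        if (first.single() == second.single()) first.single() else previousFrame
    else -> previousFrame // empty result sets: unspecified in the patent, assumed fallback
}
```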
Different interfaces of the target application may be delimited by frames, i.e. one interface per frame. The previous frame interface is the interface displayed by the target application in the frame immediately preceding the current interface. In this embodiment, the interface scene type of the interface corresponding to each frame may be determined frame by frame. Thus, when the first scene result and the second scene result conflict, the scene result of the previous frame interface may be taken as the scene result (i.e., the interface scene type) of the current interface of the target application.
In this embodiment, the scene result of the target application is determined according to the first scene result and the second scene result. Because both the interface touch data and the interface rendering data accurately reflect the characteristics of the current interface of the target application in real time, identifying the interface scene type from them offers high real-time performance, accuracy and reliability. In addition, when the first scene result and the second scene result conflict, i.e., their common part is not unique or they differ entirely, a single frame-level interface lasts only a short time and two adjacent interfaces usually differ little (for example, several adjacent interfaces often share the same interface scene type), so taking the scene result of the previous frame interface as the scene result of the current interface preserves recognition accuracy to the greatest extent.
In one embodiment, the target application is a game-like application. After the scene result of the target application is determined according to the first scene result and the second scene result, the target system performance parameter corresponding to the scene result of the target application can be determined, and then the target application is operated based on the target system performance parameter. Optionally, determining the target system performance parameter corresponding to the scene result of the target application according to the preset corresponding relation between each scene result and the system performance parameter of the terminal equipment.
The system performance parameters may include at least one of a CPU (Central Processing Unit) frequency and a GPU (Graphics Processing Unit) frequency.
The corresponding relation between the preset scene results and the system performance parameters of the terminal equipment can be stored locally in the terminal equipment or in the cloud equipment. After determining the scene result of the target application, the target system performance parameter corresponding to the determined scene result can be obtained from the terminal equipment local or cloud equipment, so that the target application is operated based on the target system performance parameter.
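A hypothetical mapping from scene results to system performance parameters; the scene names and frequency values are placeholders, and applyFrequencies stands in for a platform-specific scheduling call that the patent does not define.

```kotlin
// Placeholder performance parameters per scene result; values are illustrative only.
data class PerfParams(val cpuKhz: Int, val gpuKhz: Int)

val scenePerfTable = mapOf( // could live locally on the terminal device or on a cloud device
    "combat"     to PerfParams(cpuKhz = 2_800_000, gpuKhz = 850_000),
    "settlement" to PerfParams(cpuKhz = 1_400_000, gpuKhz = 400_000),
    "mall"       to PerfParams(cpuKhz = 1_400_000, gpuKhz = 400_000)
)

// Look up the target system performance parameters for the decided scene and apply them.
fun applyForScene(scene: String) {
    scenePerfTable[scene]?.let { applyFrequencies(it.cpuKhz, it.gpuKhz) }
}

fun applyFrequencies(cpuKhz: Int, gpuKhz: Int) { /* platform-specific; not part of the patent */ }
```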
In this embodiment, by running the target application according to the target system performance parameter corresponding to the scenario result of the target application, the system performance parameter of the terminal device can be matched with the scenario result (i.e., the interface scenario type) of each interface of the target application, so that not only stable running of the target application is ensured, but also the device resource loss of the terminal device is reduced to the maximum extent, and the heating phenomenon of the terminal device is reduced. In addition, under the condition that the interface scene type is identified by taking the frame as a unit, the interface corresponding to each frame can be ensured to be matched with the system performance parameters of the terminal equipment, and a finer performance scheduling scheme is provided for the terminal equipment.
The scene recognition method provided by the embodiments of the application may be executed by a scene recognition device. The scene recognition device provided by the embodiments of the present application is described below, taking as an example a scene recognition device executing the scene recognition method.
Fig. 4 is a schematic block diagram of a scene recognition device in accordance with an embodiment of the application. As shown in fig. 4, the scene recognition apparatus includes:
the acquiring module 41 is configured to acquire, during a running process of a target application, interface related data of a current interface of the target application; the interface related data comprises interface rendering data and interface touch data;
the processing module 42 is configured to obtain a first scene result according to the interface rendering data, and obtain a second scene result according to the interface touch data;
a determining module 43, configured to determine a scenario result of the target application according to the first scenario result and the second scenario result.
In one embodiment, the processing module 42 includes:
the first processing unit is used for obtaining the first scene result according to at least one of drawing submission times, canvas buffering times, memory read-write quantity and generation times of target rendering data.
In one embodiment, the processing module 42 includes:
and the second processing unit is used for obtaining the second scene result according to at least one of the high-frequency click position and the high-frequency slide position.
In one embodiment, the determining module 43 is configured to:
when the number of the first scene results is equal to 1 and the number of the second scene results is greater than 1, determining the first scene results as scene results of the target application;
when the number of the first scene results is greater than 1 and the number of the second scene results is equal to 1, determining the second scene results as scene results of the target application;
determining a scene result of the target application according to the number of the same scene results when the number of the first scene results and the number of the second scene results are both larger than 1 and the same scene result exists between the first scene result and the second scene result;
determining the first scene result or the second scene result as the scene result of the target application under the condition that the number of the first scene results and the number of the second scene results are equal to 1 and the first scene result is the same as the second scene result;
And determining the scene result of the previous frame interface of the target application as the scene result of the target application under the condition that the number of the first scene results and the number of the second scene results are equal to 1 and the first scene results are different from the second scene results.
In one embodiment, the determining module 43 is further configured to:
under the condition that the number of the same scene results is equal to 1, determining that the same scene results are scene results of the target application;
and under the condition that the number of the same scene results is greater than 1, determining the scene result of the previous frame interface as the scene result of the target application.
In one embodiment, the target application is a game-like application;
the apparatus further comprises:
the second determining module is configured to determine, after determining the scene result of the target application according to the first scene result and the second scene result, a target system performance parameter corresponding to the scene result of the target application;
and the operation module is used for operating the target application based on the target system performance parameters.
In the embodiments of the application, while the target application is running, interface related data of its current interface are acquired, the data comprising interface rendering data and interface touch data; a first scene result is then obtained according to the interface rendering data, a second scene result is obtained according to the interface touch data, and the scene result of the target application is determined according to the first scene result and the second scene result. Because the scene result of the target application is determined from both scene results, the accuracy of application scene recognition can be improved.
The scene recognition device in the embodiments of the application may be an electronic device, or may be a component in an electronic device, for example, an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. By way of example, the electronic device may be a mobile phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, mobile internet device (MID), augmented reality (AR)/virtual reality (VR) device, robot, wearable device, ultra-mobile personal computer (UMPC), netbook or personal digital assistant (PDA), and may also be a server, network attached storage (NAS), personal computer (PC), television (TV), teller machine or self-service machine, etc.; the embodiments of the present application are not specifically limited in this respect.
The scene recognition device in the embodiments of the application may be a device with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The scene recognition device provided in the embodiment of the present application can implement each process implemented by the method embodiment of fig. 1, and in order to avoid repetition, a description is omitted here.
Optionally, as shown in fig. 5, the embodiment of the present application further provides an electronic device 500, including a processor 501 and a memory 502, where the memory 502 stores a program or an instruction that can be executed on the processor 501, and the program or the instruction implements each step of the above-mentioned embodiment of the scene recognition method when executed by the processor 501, and can achieve the same technical effect, so that repetition is avoided, and no further description is given here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 6 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: radio frequency unit 1001, network module 1002, audio output unit 1003, input unit 1004, sensor 1005, display unit 1006, user input unit 1007, interface unit 1008, memory 1009, and processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 1010 by a power management system to perform functions such as managing charge, discharge, and power consumption by the power management system. The electronic device structure shown in fig. 6 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
The processor 1010 is configured to acquire, during the running of a target application, interface related data of the current interface of the target application, the interface related data comprising interface rendering data and interface touch data; obtain a first scene result according to the interface rendering data and a second scene result according to the interface touch data; and determine the scene result of the target application according to the first scene result and the second scene result.
Optionally, the processor 1010 is further configured to obtain the first scene result according to at least one of a rendering submission number, a canvas buffering number, a memory read-write amount, and a generation number of target rendering data.
Optionally, the processor 1010 is further configured to obtain the second scene result according to at least one of a high-frequency click position and a high-frequency slide position.
Optionally, the processor 1010 is further configured to determine the first scene result as the scene result of the target application when the number of the first scene results is equal to 1 and the number of the second scene results is greater than 1; when the number of the first scene results is greater than 1 and the number of the second scene results is equal to 1, determine the second scene result as the scene result of the target application; determine the scene result of the target application according to the number of the same scene results when the number of the first scene results and the number of the second scene results are both larger than 1 and the same scene result exists between the first scene result and the second scene result; determine the first scene result or the second scene result as the scene result of the target application under the condition that the number of the first scene results and the number of the second scene results are equal to 1 and the first scene result is the same as the second scene result; and determine the scene result of the previous frame interface of the target application as the scene result of the target application under the condition that the number of the first scene results and the number of the second scene results are equal to 1 and the first scene results are different from the second scene results.
Optionally, the processor 1010 is further configured to determine that the same scene result is the scene result of the target application if the number of the same scene results is equal to 1; and under the condition that the number of the same scene results is greater than 1, determining the scene result of the previous frame interface as the scene result of the target application.
Optionally, the target application is a game application; the processor 1010 is further configured to determine, after determining the scene result of the target application according to the first scene result and the second scene result, a target system performance parameter corresponding to the scene result of the target application; and operating the target application based on the target system performance parameter.
In the embodiments of the application, while the target application is running, interface related data of its current interface are acquired, the data comprising interface rendering data and interface touch data; a first scene result is then obtained according to the interface rendering data, a second scene result is obtained according to the interface touch data, and the scene result of the target application is determined according to the first scene result and the second scene result. Because the scene result of the target application is determined from both scene results, the accuracy of application scene recognition can be improved.
It should be understood that in the embodiment of the present application, the input unit 1004 may include a graphics processor (Graphics Processing Unit, GPU) 10041 and a microphone 10042, and the graphics processor 10041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 can include two portions, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
The memory 1009 may be used to store software programs as well as various data. The memory 1009 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system, and application programs or instructions required for at least one function (such as a sound playing function and an image playing function). Further, the memory 1009 may include a volatile memory or a nonvolatile memory, or the memory 1009 may include both volatile and nonvolatile memories. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (Programmable ROM, PROM), an erasable PROM (Erasable PROM, EPROM), an electrically erasable PROM (Electrically Erasable PROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), a static RAM (Static RAM, SRAM), a dynamic RAM (Dynamic RAM, DRAM), a synchronous DRAM (Synchronous DRAM, SDRAM), a double data rate SDRAM (Double Data Rate SDRAM, DDR SDRAM), an enhanced SDRAM (Enhanced SDRAM, ESDRAM), a synchlink DRAM (Synchlink DRAM, SLDRAM), or a direct Rambus RAM (Direct Rambus RAM, DRRAM). The memory 1009 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 1010 may include one or more processing units. Optionally, the processor 1010 integrates an application processor and a modem processor, where the application processor mainly handles operations involving the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication signals, such as a baseband processor. It can be understood that the modem processor may alternatively not be integrated into the processor 1010.
An embodiment of the present application further provides a readable storage medium storing a program or an instruction. When the program or instruction is executed by a processor, the processes of the above scene recognition method embodiment are implemented, and the same technical effects can be achieved. To avoid repetition, details are not described here again.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a read-only memory ROM, a random access memory RAM, a magnetic disk, or an optical disk.
An embodiment of the present application further provides a chip. The chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement the processes of the above scene recognition method embodiment and achieve the same technical effects. To avoid repetition, details are not described here again.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip chip.
An embodiment of the present application further provides a computer program product. The program product is stored in a storage medium and is executed by at least one processor to implement the processes of the above scene recognition method embodiment and achieve the same technical effects. To avoid repetition, details are not described here again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element. Furthermore, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, the functions may also be performed in a substantially simultaneous manner or in the reverse order. For example, the described methods may be performed in an order different from the order described, and steps may also be added, omitted, or combined. In addition, features described with reference to some examples may be combined in other examples.
From the above description of the embodiments, it is clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware, although in many cases the former is the preferred implementation. Based on such an understanding, the technical solutions of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above specific embodiments, which are merely illustrative rather than restrictive. Inspired by the present application, those of ordinary skill in the art may devise many other forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (11)

1. A method of scene recognition, the method comprising:
acquiring interface related data of a current interface of a target application in the running process of the target application; the interface related data comprises interface rendering data and interface touch data;
obtaining a first scene result according to the interface rendering data and obtaining a second scene result according to the interface touch data;
and determining the scene result of the target application according to the first scene result and the second scene result.
2. The method of claim 1, wherein the obtaining a first scene result from the interface rendering data comprises:
and obtaining the first scene result according to at least one of a number of draw submissions, a number of canvas buffers, an amount of memory reads and writes, and a number of times target rendering data is generated.
3. The method of claim 1, wherein the obtaining a second scene result according to the interface touch data comprises:
and obtaining the second scene result according to at least one of a high-frequency click position and a high-frequency slide position.
4. The method of claim 1, wherein the determining the scene result of the target application according to the first scene result and the second scene result comprises:
when the number of first scene results is equal to 1 and the number of second scene results is greater than 1, determining the first scene result as the scene result of the target application;
when the number of first scene results is greater than 1 and the number of second scene results is equal to 1, determining the second scene result as the scene result of the target application;
when the number of first scene results and the number of second scene results are both greater than 1 and a same scene result exists between the first scene results and the second scene results, determining the scene result of the target application according to the number of same scene results;
when the number of first scene results and the number of second scene results are both equal to 1 and the first scene result is the same as the second scene result, determining the first scene result or the second scene result as the scene result of the target application; and
when the number of first scene results and the number of second scene results are both equal to 1 and the first scene result is different from the second scene result, determining the scene result of the previous frame interface of the target application as the scene result of the target application.
5. The method of claim 4, wherein the determining the scene result of the target application according to the number of same scene results comprises:
when the number of same scene results is equal to 1, determining the same scene result as the scene result of the target application; and
when the number of same scene results is greater than 1, determining the scene result of the previous frame interface as the scene result of the target application.
6. The method of claim 1, wherein the target application is a game application;
after determining the scene result of the target application according to the first scene result and the second scene result, the method further includes:
determining a target system performance parameter corresponding to the scene result of the target application;
and running the target application based on the target system performance parameter.
7. A scene recognition device, the device comprising:
the acquisition module is used for acquiring the interface related data of the current interface of the target application in the running process of the target application; the interface related data comprises interface rendering data and interface touch data;
The processing module is used for obtaining a first scene result according to the interface rendering data and obtaining a second scene result according to the interface touch data;
and the determining module is used for determining the scene result of the target application according to the first scene result and the second scene result.
8. The apparatus of claim 7, wherein the processing module comprises:
the first processing unit is used for obtaining the first scene result according to at least one of a number of draw submissions, a number of canvas buffers, an amount of memory reads and writes, and a number of times target rendering data is generated.
9. The apparatus of claim 7, wherein the processing module comprises:
and the second processing unit is used for obtaining the second scene result according to at least one of a high-frequency click position and a high-frequency slide position.
10. The apparatus of claim 7, wherein the determining module is configured to:
when the number of first scene results is equal to 1 and the number of second scene results is greater than 1, determining the first scene result as the scene result of the target application;
when the number of first scene results is greater than 1 and the number of second scene results is equal to 1, determining the second scene result as the scene result of the target application;
when the number of first scene results and the number of second scene results are both greater than 1 and a same scene result exists between the first scene results and the second scene results, determining the scene result of the target application according to the number of same scene results;
when the number of first scene results and the number of second scene results are both equal to 1 and the first scene result is the same as the second scene result, determining the first scene result or the second scene result as the scene result of the target application; and
when the number of first scene results and the number of second scene results are both equal to 1 and the first scene result is different from the second scene result, determining the scene result of the previous frame interface of the target application as the scene result of the target application.
11. An electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the scene recognition method of any of claims 1-6.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311526001.5A 2023-11-15 2023-11-15 Scene recognition method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN117555637A 2024-02-13

Family

ID=89819974

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311526001.5A Scene recognition method and device and electronic equipment 2023-11-15 2023-11-15

Country Status (1)

Country Link
CN (1) CN117555637A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination