CN113157455A - Memory management method and device, electronic equipment and computer readable storage medium

Info

Publication number: CN113157455A
Application number: CN202110457259.9A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: memory, data object, occupied, data
Inventors: 庞雨生, 梁本志, 李建全
Original Assignee: Tencent Technology Shenzhen Co Ltd
Current Assignee: Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd; priority to CN202110457259.9A
Legal status: Pending
Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5022Mechanisms to release resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5021Priority


Abstract

The application provides a memory management method, a memory management device, an electronic device and a computer-readable storage medium. The method includes: injecting a memory collection program into a running process; collecting, through the injected memory collection program, the occupied memory of a plurality of data objects corresponding to the process; determining, for each data object, the difference between its occupied memory at different moments; and determining, among the plurality of data objects, the data objects with memory leaks according to the occupied-memory differences corresponding to the plurality of data objects, thereby locating the root cause of the memory leak. Through the method and the device, the data objects with memory leaks can be accurately determined, and the memory management effect is improved.

Description

Memory management method and device, electronic equipment and computer readable storage medium
Technical Field
The present disclosure relates to computer technologies, and in particular, to a memory management method and apparatus, an electronic device, and a computer-readable storage medium.
Background
The memory is an important component of an electronic device: all processes of the electronic device run in memory, and memory performance affects the overall operating efficiency of the device. A memory leak may occur while a process is running; a memory leak refers to a situation in which memory allocated to the process cannot be correctly released for some reason, which in turn leads to low process running efficiency, process crashes, or even operating system crashes.
In the solutions provided in the related art, the code of a process is usually modified manually and intrusively in order to collect the number of times the data objects corresponding to the process are referenced, and whether a data object has a memory leak is then judged from its reference count. However, the correlation between the reference count and memory leaks is weak; for example, a data object referenced only once may in practice still suffer a serious memory leak. Therefore, the related-art solutions cannot accurately locate the data objects with memory leaks, and the memory management effect is poor.
Disclosure of Invention
The embodiment of the application provides a memory management method and device, an electronic device and a computer-readable storage medium, which can accurately determine a data object with memory leakage and improve the memory management effect.
The technical scheme of the embodiment of the application is realized as follows:
an embodiment of the present application provides a memory management method, including:
injecting a memory acquisition program into a running process;
acquiring occupied memory of a plurality of data objects corresponding to the process through the injected memory acquisition program;
determining the difference of occupied memory of the data object at different moments;
and determining the data objects with memory leakage in the plurality of data objects according to the difference of the occupied memories corresponding to the plurality of data objects respectively.
An embodiment of the present application provides a memory management device, including:
the injection module is used for injecting the memory acquisition program into the running process;
the acquisition module is used for acquiring the occupied memory of the data objects corresponding to the process through the injected memory acquisition program;
the difference determining module is used for determining the difference of the occupied memories of the data objects at different moments;
and the screening module is used for determining the data objects with memory leakage in the plurality of data objects according to the difference of the occupied memories corresponding to the plurality of data objects respectively.
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the memory management method provided by the embodiment of the application when executing the executable instructions stored in the memory.
An embodiment of the present application provides a computer-readable storage medium storing executable instructions which, when executed by a processor, cause the processor to implement the memory management method provided in the embodiments of the present application.
The embodiment of the application has the following beneficial effects:
and injecting a memory acquisition program into the running process, acquiring occupied memories of a plurality of data objects corresponding to the process through the injected memory acquisition program, and determining the data objects with memory leakage in the plurality of data objects according to the difference of the occupied memories of the data objects at different moments. Therefore, on one hand, the occupied memory is collected in a non-invasive mode (namely, a memory collection program is injected), and the accuracy of the collected occupied memory can be ensured; on the other hand, the data object with memory leakage is determined by occupying memory difference, so that the precision of positioning the memory leakage source can be improved, namely the memory management effect can be improved.
Drawings
Fig. 1 is a schematic structural diagram of a memory management system according to an embodiment of the present application;
fig. 2 is a schematic architecture diagram of a terminal device provided in an embodiment of the present application;
fig. 3A is a schematic flowchart of a memory management method according to an embodiment of the present application;
fig. 3B is a schematic flowchart of a memory management method according to an embodiment of the present application;
fig. 3C is a schematic flowchart of a memory management method according to an embodiment of the present application;
fig. 3D is a schematic flowchart of a memory management method according to an embodiment of the present application;
fig. 3E is a schematic flowchart of a memory management method according to an embodiment of the present application;
fig. 4A is a schematic diagram of a memory map provided by an embodiment of the present application;
fig. 4B is a schematic diagram of a memory map provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of a memory map provided by an embodiment of the present application;
fig. 6 is a schematic flow chart of memory collection provided in the embodiment of the present application;
fig. 7 is a schematic flow chart illustrating a memory leak analysis according to an embodiment of the present disclosure;
fig. 8 is a schematic flowchart of generating a memory distribution diagram according to an embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions and advantages of the present application clearer, the present application will be described in further detail with reference to the accompanying drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first", "second", and the like are only used to distinguish similar objects and do not denote a particular order. It is understood that, where permissible, "first", "second", and the like may be interchanged in a specific order or sequence, so that the embodiments of the present application described herein can be implemented in an order other than that illustrated or described herein. In the following description, the term "plurality" means at least two.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
1) Memory (Memory): a main component of an electronic device, also called internal memory or main memory, used for storing data operated on by the Central Processing Unit (CPU) and data exchanged with external storage such as a hard disk. All processes of the electronic device run in memory, and memory performance affects the overall operating efficiency of the device, so effective memory management can improve the running efficiency of both processes and the electronic device. In the embodiments of the present application, occupied memory refers to the occupied memory space, and its units include, but are not limited to, Kilobytes (KB), Megabytes (MB), Gigabytes (GB), and Terabytes (TB).
2) Memory Leak (Memory Leak): a situation in which a process (or a data object in a process) fails, through negligence or error, to release memory that is no longer in use. A memory leak does not mean that memory physically disappears; rather, after memory is allocated to a process (or a data object in a process), control over that memory is lost for some reason (e.g., a design error) before it is released, so that the memory is wasted. In the embodiments of the present application, a memory leak may, for example, take the form of data being continuously added to a container (e.g., a data object) so that the container's occupied memory grows steadily; of course, memory leaks are not limited to this case.
3) Process (Process): a running activity of a program in the electronic device over a data set; it is the basic unit by which the operating system allocates and schedules resources, and the foundation of the operating system structure. In a sense, a process can be viewed as a running instance of a program. In the embodiments of the present application, memory management may be performed on any process run by the electronic device, and the function of the process is not limited; for example, the process may be used for screen display, network connection, software/hardware maintenance, firewall maintenance, running a virtual scene (such as a game virtual scene), and the like.
4) Injecting: injecting code that is expected to run into another running process, so that the injected code runs automatically while that process runs. Injection is non-intrusive and ensures that the original code of the process is not changed. In the embodiments of the present application, a memory collection program may be injected into a running process so that occupied memory is collected through the memory collection program; the type of the memory collection program is not limited, and it may be code in any form that implements the memory collection function.
5) Data object: a data structure is the way a computer stores and organizes data, such as arrays, stacks, and queues; a data object refers to an object created from a data structure. For example, in a software project using the Lua language, the data structure is the table (Table), and the data objects are the tables actually created in the software project.
6) Shared Object (SO) library: also known as a shared object file, includes binary code that an electronic device can directly run. In the embodiment of the application, the injection of the memory collection program can be realized by loading the shared object library corresponding to the memory collection program.
7) Virtual scene: by utilizing scenes which are output by electronic equipment and are different from the real world, visual perception of a virtual scene can be formed through naked eyes or assistance of equipment, such as two-dimensional images output through a display screen, and three-dimensional images output through stereoscopic display technologies such as stereoscopic projection, virtual reality and augmented reality technologies; in addition, various real-world-simulated perceptions such as auditory perception, tactile perception, olfactory perception, motion perception and the like can be formed through various possible hardware. The virtual scene may be a simulation environment of a real world, a semi-simulation semi-fictional virtual environment, or a pure fictional virtual environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene, and the dimension of the virtual scene is not limited in the embodiment of the present application.
For the problem of memory leaks, the solutions provided in the related art usually modify the code of the process manually and intrusively in order to collect the number of times the data objects corresponding to the process are referenced, and then judge whether a data object has a memory leak from its reference count. Such solutions have at least the following problems: 1) the underlying code of the process has to be modified manually, the operation is cumbersome, and the learning and operation-and-maintenance costs are high; 2) the intrusive modification increases the occupied memory on the process side, that is, the memory used for collecting reference counts is counted into the occupied memory of the process, which is unfavorable for locating the source of the memory leak; 3) the correlation between reference counts and memory leaks is weak; for example, a data object referenced only once may still suffer a relatively serious memory leak, while another data object referenced many times may not actually leak at all, so the data objects with memory leaks cannot be accurately located from reference counts. In summary, in the solutions provided in the related art, the memory management effect is poor and memory is easily wasted on invalid occupation, resulting in low running efficiency of the process and the electronic device.
The embodiment of the application provides a memory management method and device, an electronic device and a computer-readable storage medium, which can accurately determine a data object with memory leakage and improve the memory management effect. An exemplary application of the electronic device provided in the embodiment of the present application is described below, and the electronic device provided in the embodiment of the present application may be implemented as various types of terminal devices, and may also be implemented as a server.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a memory management system 100 according to an embodiment of the present application, a terminal device 400 is connected to a server 200 through a network 300, and the server 200 is connected to a database 500, where the network 300 may be a wide area network or a local area network, or a combination of the two.
In some embodiments, taking the electronic device as a terminal device as an example, the memory management method provided in the embodiments of the present application may be implemented by the terminal device. For example, the terminal device 400 may inject a memory collection program into a running process in the terminal device 400, and collect memory occupied by a plurality of data objects corresponding to the process through the injected memory collection program, where the memory collection program may be pre-stored in the terminal device 400, or may be acquired by the terminal device 400 from the outside. For each data object, the terminal device 400 determines the difference in occupied memory between the occupied memories of the data object at different times. According to the difference of the occupied memory corresponding to each data object, the terminal device 400 determines the data object with memory leakage from the plurality of data objects, so as to realize the positioning of the memory leakage.
In some embodiments, taking the electronic device as a server as an example, the memory management method provided in the embodiments of the present application may also be implemented by the server. For example, the server 200 may inject a memory collection program into a running process in the server 200, and after a series of processing, the server 200 determines a data object with a memory leak among a plurality of data objects corresponding to the process, so as to realize the location of the memory leak. The memory collection program may be acquired by the server 200 from the database 500, or may be pre-stored in the server 200 (e.g., in a distributed file system of the server 200).
In some embodiments, the memory management method provided in the embodiments of the present application may also be implemented by a server and a terminal device in a cooperative manner. For example, the terminal device 400 may send the memory collection program to the server 200, so that the server 200 performs memory management on the running process according to the received memory collection program. For example, the server 200 may send the memory collection program to the terminal device 400, so that the terminal device 400 performs memory management on the running process according to the received memory collection program. For example, in the process of memory management, the terminal device 400 may send the memory occupied by the collected multiple data objects to the server 200; for the server 200, according to the received occupied memories of the plurality of data objects, the occupied memory difference between the occupied memories of the data objects at different times can be determined, and according to the occupied memory difference, the data object with memory leak can be determined in the plurality of data objects, and then, the server 200 can notify the terminal device 400 according to the data object with memory leak, so that the memory management efficiency can be improved by means of the computing capability of the server 200.
The embodiment of the present application does not limit the type of the process, and for example, the process may be used to run a virtual scene (e.g., a game virtual scene). In some embodiments, the terminal device 400 may calculate data required for display through the graphics computing hardware, and complete loading, parsing, and rendering of the display data, and output a video frame capable of forming visual perception on a virtual scene at the graphics output hardware, for example, a two-dimensional video frame is presented on a display screen of a smart phone, or a video frame realizing a three-dimensional display effect is projected on a lens of an augmented reality/virtual reality glasses; furthermore, in order to enrich the perception effect, the terminal device may also form one or more of auditory perception (e.g., by means of a microphone), tactile perception (e.g., by means of a vibrator), motion perception, and taste perception by means of different hardware, which is exemplified here in the case of presenting a virtual scene. In the running process (presentation process) of the virtual scene, the terminal device 400 may perform memory management on the process at regular time or in case of user trigger. In the case that user triggering is required, as shown in fig. 1, the terminal device 400 may present an option for memory management (for example, in a virtual scene), and perform memory management on the process when receiving a triggering operation on the option for memory management.
In some embodiments, the server 200 may also perform memory management on a process for running a virtual scene, in this case, the server 200 may be a background server of the virtual scene. For example, the server 200 performs calculation of the virtual scene related display data and transmits the virtual scene related display data to the terminal device 400, and the terminal device 400 relies on the graphics computing hardware to complete loading, parsing and rendering of the calculation display data and relies on the graphics output hardware to output the virtual scene to form visual perception. In the running process (presentation process) of the virtual scene, the server 200 may perform memory management on the process at regular time or upon receiving a memory management request from the terminal device 400. For example, the terminal device 400 may present an option for memory management and send a memory management request to the server 200 upon receiving a trigger operation for the option for memory management.
In some embodiments, various results (such as a memory collection program, a difference between occupied memory and occupied memory, and the like) involved in the memory management process can be stored in the blockchain, and since the blockchain has a non-falsification characteristic, the accuracy of data in the blockchain can be ensured. The electronic device may send a query request to the blockchain to query data stored in the blockchain, for example, when it is necessary to determine the difference in occupied memory, the electronic device may query the occupied memory stored in the blockchain at different times.
In some embodiments, the terminal device 400 or the server 200 may implement the memory management method provided in the embodiment of the present application by running a computer program, where the computer program is, for example, the client 410 in fig. 1. For example, the computer program may be a native program or a software module in an operating system; may be a Native Application (Application), i.e., a program that needs to be installed in an operating system to run, such as a military simulation program, a game Application; or may be an applet, i.e. a program that can be run only by downloading it to the browser environment; but also an applet that can be embedded in any APP, which applet can be run or shut down by user control. In general, the computer programs described above may be any form of application, module or plug-in. As for the game application, it may be any one of First-Person shooter (FPS) game, Third-Person shooter (TPS) game, Multiplayer Online Battle Arena (MOBA) game, and Multiplayer gunfight live game, and the like, which is not limited in this respect.
In some embodiments, a server (e.g., the server 200 in fig. 1) may be an independent physical server, may also be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server that provides basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), and a big data and artificial intelligence platform, where the cloud service may be a memory management service for a terminal device to call. The terminal device (e.g., terminal device 400 in fig. 1) may be, but is not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart television, a smart watch, and the like. The terminal device and the server may be directly or indirectly connected through wired or wireless communication, and the embodiment of the present application is not limited.
In some embodiments, the database (e.g., database 500 of FIG. 1) and the server (e.g., server 200 of FIG. 1) may be provided independently. In some embodiments, the database and the server may also be integrated, i.e., the database may be considered integrated with the server, which may provide the related functions of the database.
The following takes the case where the electronic device provided in the embodiments of the present application is a terminal device as an example for illustration. It can be understood that, when the electronic device is a server, some parts of the structure shown in fig. 2 (such as the user interface, the presentation module, and the input processing module) may be omitted by default. Referring to fig. 2, fig. 2 is a schematic structural diagram of a terminal device 400 provided in an embodiment of the present application, where the terminal device 400 shown in fig. 2 includes: at least one processor 410, memory 450, at least one network interface 420, and a user interface 430. The various components in the terminal device 400 are coupled together by a bus system 440. It is understood that the bus system 440 is used to enable communications among the components. The bus system 440 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 440 in fig. 2.
The Processor 410 may be an integrated circuit chip having Signal processing capabilities, such as a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like, wherein the general purpose Processor may be a microprocessor or any conventional Processor, or the like.
The user interface 430 includes one or more output devices 431, including one or more speakers and/or one or more visual displays, that enable the presentation of media content. The user interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 450 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 450 optionally includes one or more storage devices physically located remote from processor 410.
The memory 450 includes either volatile memory or nonvolatile memory, and may include both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a Random Access Memory (RAM). The memory 450 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 450 is capable of storing data, examples of which include programs, modules, and data structures, or a subset or superset thereof, to support various operations, as exemplified below.
An operating system 451, including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and handling hardware-based tasks;
a network communication module 452 for communicating to other electronic devices via one or more (wired or wireless) network interfaces 420, exemplary network interfaces 420 including: bluetooth, wireless compatibility authentication (WiFi), and Universal Serial Bus (USB), etc.;
a presentation module 453 for enabling presentation of information (e.g., user interfaces for operating peripherals and displaying content and information) via one or more output devices 431 (e.g., display screens, speakers, etc.) associated with user interface 430;
an input processing module 454 for detecting one or more user inputs or interactions from one of the one or more input devices 432 and translating the detected inputs or interactions.
In some embodiments, the memory management device provided in this embodiment of the present application may be implemented in software, and fig. 2 illustrates the memory management device 455 stored in the storage 450, which may be software in the form of programs and plug-ins, and includes the following software modules: an injection module 4551, an acquisition module 4552, a variance determination module 4553 and a screening module 4554, which are logical and thus may be arbitrarily combined or further split depending on the functions implemented. The functions of the respective modules will be explained below.
The memory management method provided by the embodiment of the present application will be described with reference to exemplary applications and implementations of the electronic device provided by the embodiment of the present application.
Referring to fig. 3A, fig. 3A is a schematic flowchart of a memory management method according to an embodiment of the present application, and the steps shown in fig. 3A will be described.
In step 101, a memory capture program is injected into a running process.
Here, the type of the memory collection program is not limited, and may be any form of code for implementing a memory collection function, such as a memory collection script, for example. For the process which needs to perform memory management, in the embodiment of the present application, memory collection is implemented in a non-invasive manner, for example, a memory collection program may be injected into a running process, so that, on the basis of not destroying a native code of the process, the injected memory collection program can be simultaneously run in the running process of the process, and meanwhile, the memory occupied by the memory collection program in the running process is not counted in the memory occupied by the process.
The injection manner is not limited in the embodiment of the present application, and for example, an SO library (or SO library file) corresponding to the memory collection program may be loaded into the process, SO that the SO library, that is, the memory collection program, may be run simultaneously in the running process of the process.
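To make the injection step concrete, the following is a minimal sketch (not the patented implementation) of what the memory collection program's SO library can look like on Linux: a GCC constructor attribute makes an entry function run automatically inside the target process as soon as the library is loaded, without touching the process's native code. The function names are illustrative assumptions.

```c
#include <stdio.h>

/* Placeholder for the actual collection logic; a fuller sketch of the
 * table traversal appears later in this section. */
static void collect_memory_snapshot(void)
{
    fprintf(stderr, "[mem-collector] snapshot not implemented in this sketch\n");
}

/* Runs inside the injected process as soon as the SO library is mapped. */
__attribute__((constructor))
static void on_injected(void)
{
    fprintf(stderr, "[mem-collector] injected, starting collection\n");
    collect_memory_snapshot();
}
```

Built with, for example, gcc -shared -fPIC, this SO library runs its constructor once loaded into the target process.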
In step 102, the occupied memory of the plurality of data objects corresponding to the process is acquired through the injected memory acquisition program.
Here, the data object refers to an object created by a specific data structure, for example, the programming language of a process is Lua language, and the data object may refer to a Table data object created by a Table data structure, wherein the Table data structure is a data structure commonly used in the Lua language.
For the memory collection program injected into the process, the occupied memory of all or some of the data objects corresponding to the process can be collected while the injected memory collection program runs, where the memory occupied by a data object refers to the memory space the data object occupies in the electronic device. It should be noted that the data objects whose memory is to be collected may be preset in the memory collection program according to creation time, priority, and the like; that is, memory collection may be configured for the data objects that are more likely to cause memory leaks or that are more important. For example, a data object whose creation time is later than a set time may be taken as a data object whose memory needs to be collected; or a data object whose priority is higher than a set priority may be taken as such a data object.
In some embodiments, before step 102, the method further includes: determining a process identifier corresponding to the process; determining, according to the process identifier, the address of the execution state machine corresponding to the process, where the execution state machine is used for managing the data object information corresponding to the process and the data object information includes the plurality of data objects; and determining the address of the data object information according to the process identifier and the address of the execution state machine. Collecting, through the injected memory collection program, the occupied memory of the plurality of data objects corresponding to the process may then be implemented as follows: calling the address of the data object information through the injected memory collection program, so as to collect the occupied memory of the plurality of data objects in the data object information.
In order to implement memory collection, in the embodiment of the present application, a Process Identifier (ID) corresponding to a Process may be first determined, for example, in a Linux operating system, a Process identifier of each Process running in the Linux operating system may be determined through a Process Status (PS) command. Then, the address of the execution state machine corresponding to the process is determined according to the process identifier, for example, for each process running in the electronic device, a corresponding relationship between the process identifier corresponding to the process and the address of the execution state machine corresponding to the process may be pre-established, so that after the process identifier corresponding to the process requiring memory management is obtained, the address of the execution state machine corresponding to the process identifier may be determined according to the pre-established corresponding relationship. Furthermore, the address of the data object information may be determined according to the process identifier and the address of the execution state machine, for example, the execution state machine is preset with a function of outputting the address of the data object information, and after the address of the execution state machine is obtained, the execution state machine may be triggered to output the address of the data object information corresponding to the process identifier.
On the basis of obtaining the address of the data object information, the address of the data object information can be called through the injected memory collection program so as to collect the occupied memory of all or some of the data objects in the data object information. By sequentially determining the process identifier, the address of the execution state machine, and the address of the data object information in the above manner, the success rate of memory collection can be improved.
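As a rough illustration of this chain (process identifier, then execution state machine address, then data object information address), the sketch below assumes the process is a Lua process, so the execution state machine corresponds to a lua_State pointer and the data object information can be reached through the globals table. The lookup table and function names are assumptions for illustration, not the patent's exact implementation.

```c
#include <stddef.h>
#include <lua.h>

typedef struct {
    int   pid;                /* process identifier                    */
    void *state_machine_addr; /* pre-recorded address of the lua_State */
} StateEntry;

/* Pre-established correspondence between process identifiers and
 * execution state machine addresses (filled in when processes start). */
static StateEntry g_state_table[] = {
    { 1234, NULL },
};

static lua_State *state_for_pid(int pid)
{
    for (size_t i = 0; i < sizeof g_state_table / sizeof g_state_table[0]; i++)
        if (g_state_table[i].pid == pid)
            return (lua_State *)g_state_table[i].state_machine_addr;
    return NULL;
}

/* "Address of the data object information": for Lua, push the globals
 * table, from which the process's data objects can be reached. */
static void push_data_object_info(lua_State *L)
{
    lua_pushglobaltable(L);
}
```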
In some embodiments, a data object includes multiple types of data, and collecting, through the injected memory collection program, the occupied memory of the plurality of data objects corresponding to the process may be implemented as follows: executing the following processing through the injected memory collection program: for any data object corresponding to the process, collecting the occupied memory of the data of set types included in that data object as the occupied memory of that data object, where a set type is at least one of the multiple types.
The data objects corresponding to the processes may include multiple types of data, and in an actual application scenario, the data objects are influenced by part of types of data, which easily causes memory leakage; other types of data have less influence on the data object and are not easy to cause memory leakage. Therefore, in the embodiment of the present application, for a data object corresponding to a process, an occupied memory of data of a set type included in the data object may be acquired by an injected memory acquisition program to serve as the occupied memory of the data object, where the set type is at least one of multiple types; the set type of data is more likely to cause memory leakage in the data object than other types of data included in the data object. Taking a Table data object corresponding to the Lua process as an example, the data of the setting type may include a meta Table (Metatable), an array, and a Hash Table (Hash), which, of course, does not constitute a limitation to the embodiment of the present application, and the setting type may be specifically set according to a requirement in an actual application scenario.
It should be noted that, when the set type is only one type, the occupied memory of the data of the set type included in the data object may be directly used as the occupied memory of the data object; when the setting type includes multiple types, the memory occupied by the data object corresponding to the multiple types of data included in the data object can be summed to obtain the memory occupied by the data object. By the mode, the workload of memory collection can be reduced, and the memory management efficiency is improved while the memory management effect is ensured. Of course, in some embodiments, a set type may also refer to all types included in a data object.
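For a Lua Table data object, a rough sketch of summing the occupied memory of the set types mentioned above (metatable, array part, hash part) could look as follows. It relies on Lua's internal headers; the field and macro names follow Lua 5.3 and are given as an assumption, not as the patent's exact accounting.

```c
#include <stddef.h>
#include "lobject.h"   /* Table, TValue, Node, sizenode() (Lua 5.3 internals) */

/* bytes used by the table header, its array part and its hash part */
static size_t table_array_hash_bytes(const Table *t)
{
    size_t bytes = sizeof(Table);
    bytes += (size_t)t->sizearray * sizeof(TValue);   /* array part */
    bytes += (size_t)sizenode(t) * sizeof(Node);      /* hash part  */
    return bytes;
}

/* occupied memory of one data object, counting only the set types */
static size_t table_occupied_memory(const Table *t)
{
    size_t bytes = table_array_hash_bytes(t);
    if (t->metatable != NULL)                         /* metatable  */
        bytes += table_array_hash_bytes(t->metatable);
    return bytes;
}
```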
In some embodiments, injecting the memory collection program into the running process may be implemented as follows: loading the shared object library corresponding to the memory collection program into the process. After step 102, the method further includes: unloading, in the process, the shared object library corresponding to the memory collection program.
Here, after the memory collection in step 102 is completed, the memory collection program injected into the process may be unloaded to avoid interfering with the operation of the process. For example, the SO library corresponding to the memory collection program may be loaded into the process to realize the injection of the memory collection program; the loading manner is not limited, and, for example, when the process runs on the Linux operating system, the SO library may be loaded through the dlopen function of the hookso tool, where hookso is a tool for injecting into, modifying, and searching Linux dynamic link libraries. When memory collection is completed, the SO library corresponding to the memory collection program may be unloaded in the process, that is, the memory collection program is unloaded; the unloading manner is likewise not limited, and, for example, when the process runs on the Linux operating system, the SO library may be unloaded through the corresponding dlclose function of the hookso tool.
It should be noted that, in the embodiments of the present application, step 102 is executed at different times, that is, executed multiple times. Therefore, the operations of loading and unloading the SO library may be executed for each memory collection (i.e., each execution of step 102), which further reduces interference with the process; alternatively, the SO library may be loaded during the first memory collection and kept loaded until the last memory collection, that is, unloaded only when the last memory collection is completed, thereby reducing the workload. Either load-unload mode may be applied according to the requirements of the actual application scenario.
In step 103, the difference in occupied memory between the occupied memories of the data objects at different times is determined.
In the embodiments of the present application, step 102 may be executed at different times, for example at a first operation time and a second operation time, where the second operation time is later than the first operation time and both may be set according to the actual application scenario. Therefore, for each data object whose memory has been collected, the occupied memory of the data object at the first operation time can be subtracted from its occupied memory at the second operation time to obtain the occupied-memory difference corresponding to the data object; this difference is the growth of the occupied memory from the first operation time to the second operation time.
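A minimal sketch of this difference computation, assuming each collection produces a snapshot of (data object address, occupied memory) samples; the structure and function names are illustrative assumptions.

```c
#include <stddef.h>

typedef struct {
    const void *object_addr;   /* address of the data object             */
    size_t      bytes;         /* occupied memory at the collection time */
} Sample;

/* Occupied-memory difference of one data object between the first and
 * second operation times: bytes at t2 minus bytes at t1; a data object
 * that did not exist at t1 contributes its whole size at t2. */
static long long occupied_memory_diff(const Sample *t1, size_t n1,
                                      const Sample *at_t2)
{
    for (size_t i = 0; i < n1; i++)
        if (t1[i].object_addr == at_t2->object_addr)
            return (long long)at_t2->bytes - (long long)t1[i].bytes;
    return (long long)at_t2->bytes;
}
```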
In step 104, a data object with a memory leak is determined among the plurality of data objects according to the difference of the occupied memories corresponding to the plurality of data objects, respectively.
For each data object whose memory has been collected, the corresponding occupied-memory difference can be obtained through step 103. The larger the occupied-memory difference, the higher the possibility that the corresponding data object has a memory leak, so the data objects with memory leaks can be determined, based on the occupied-memory differences, from all the data objects whose memory has been collected, that is, the root cause of the memory leak is located.
For the determined data object with memory leak, memory release processing may be performed, that is, all or part of the memory occupied by the data object with memory leak is released, for example, the data object with memory leak may be directly deleted, and the manner of memory release processing is not limited thereto. After the memory release processing, the running efficiency and the response speed of the process can be improved, the process can be guaranteed to smoothly realize corresponding functions, and the memory can be allocated to a useful process (or data object) for the electronic equipment.
As shown in fig. 3A, on one hand, the embodiment of the present application acquires occupied memory in a non-intrusive manner, so that accuracy of the acquired occupied memory can be ensured; on the other hand, the data object with memory leakage is determined by occupying memory difference, so that the precision of positioning the memory leakage source can be improved, and the memory management effect is improved.
In some embodiments, referring to fig. 3B, fig. 3B is a schematic flowchart of a memory management method provided in the embodiment of the present application, step 102 shown in fig. 3A may be updated to step 201, and in step 201, the following processing is performed by an injected memory collection program: and traversing a plurality of data objects in the reference linked list according to the reference sequence of the reference linked list, and acquiring the occupied memory of the traversed data objects.
In the embodiment of the application, the process can construct the reference linked list during running, the reference linked list comprises a plurality of data objects with reference relations, and the reference linked list can be used as a part of data object information. For the situation, after the memory collection program is injected into the process, the injected memory collection program can be used for performing traversal processing on the plurality of data objects in the reference linked list, and the memory occupied by the traversed data objects is collected in the traversal processing process. The first traversed data object is a root data object (root node), and the root data object is also the first created data object corresponding to the process.
The order of traversal processing may be a reference order in a reference linked list, and taking a reference chain of "a.b.c.d.eee" as an example, the reference order is a-b-c-d-eee, that is, the first traversal is a, where the reference chain is a chain structure constructed according to a plurality of data objects having reference relationships, and the reference linked list includes at least one reference chain. When the reference linked list includes a plurality of reference chains, traversal processing may be performed according to a Depth First Search (DFS) or a Breadth First Search (BFS) manner.
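A sketch of this traversal, assuming a Lua process and using only the public Lua C API: starting from a root table (for example the globals table), a depth-first walk visits each reachable table once, guarded by a visited set keyed on the table's address, and would record each traversed table's occupied memory. The helper names and visited-set size are illustrative assumptions.

```c
#include <stddef.h>
#include <lua.h>

#define MAX_VISITED 65536
static const void *g_visited[MAX_VISITED];
static size_t g_nvisited;

static int already_visited(const void *p)
{
    for (size_t i = 0; i < g_nvisited; i++)
        if (g_visited[i] == p)
            return 1;
    return 0;
}

/* table_idx must be an absolute stack index of a table */
static void traverse(lua_State *L, int table_idx)
{
    const void *p = lua_topointer(L, table_idx);
    if (p == NULL || already_visited(p) || g_nvisited == MAX_VISITED)
        return;
    g_visited[g_nvisited++] = p;

    /* record the occupied memory of this data object here */

    lua_pushnil(L);                            /* first key              */
    while (lua_next(L, table_idx) != 0) {      /* key at -2, value at -1 */
        if (lua_istable(L, -1))
            traverse(L, lua_gettop(L));        /* follow the reference   */
        lua_pop(L, 1);                         /* pop value, keep key    */
    }
}
```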
In some embodiments, the reference linked list comprises at least one reference chain; in the process of traversing a plurality of data objects in the reference linked list according to the reference sequence of the reference linked list, the method further comprises the following steps: when the reference of the target reference chain where the traversed data object is located is finished, stopping traversing the target reference chain; and when the target reference chain where the traversed data object is located is not referenced, traversing the data object which is not traversed in the target reference chain according to the reference sequence of the target reference chain.
Here, for each data object in the reference chain table, the reference end identifier corresponding to the data object may be created according to the position of the data object in the reference chain where the data object is located. For example, when the position of the data object in the reference chain is not the last one, the reference end identifier corresponding to the data object may be configured as a first identifier, which is used to indicate that the reference chain is unreferenced; when the position of the data object in the reference chain is the last one, the end-of-reference identifier corresponding to the data object may be configured as a second identifier, which is used to indicate that the reference chain has ended. Wherein the first identifier is, for example, a value 1, and the second identifier is, for example, a value 0, which is, of course, only an example here.
In the process of traversal processing, a reference end identifier corresponding to the traversed data object may be determined, and when the reference end identifier indicates that the reference of the target reference chain in which the traversed data object is located is ended (if the reference end identifier is a second identifier), traversal processing on the target reference chain is stopped; when the reference end identifier indicates that the traversed data object is not referenced and ended by the target reference chain (if the reference end identifier is the first identifier), the traversal processing is continued on the data object which is not traversed in the target reference chain according to the reference sequence of the target reference chain. The target reference chain refers to a reference chain in which the traversed data object is located, and has no other special meaning. Through the mode, the efficiency of traversal processing can be improved.
In fig. 3B, after step 104 shown in fig. 3A, in step 202, a memory release process may be performed on the data object with a memory leak.
When the data object with the memory leak is determined, the data object with the memory leak can be subjected to memory release processing to release all or part of the memory occupied by the data object with the memory leak. For example, a data object with a memory leak may be deleted directly, but the manner of the memory release process is not limited thereto.
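As one possible form of the memory release processing, in a Lua process the reference that keeps the leaking data object alive can be dropped so that the garbage collector reclaims it. The sketch below uses the public Lua C API; releasing through a global name is an illustrative assumption rather than a step prescribed by the patent.

```c
#include <lua.h>

/* Release a leaking data object that is reachable through a global name:
 * drop the reference and force a full garbage-collection cycle. */
static void release_leaking_global(lua_State *L, const char *query_index)
{
    lua_pushnil(L);
    lua_setglobal(L, query_index);     /* e.g. sets _G["leaky_cache"] = nil  */
    lua_gc(L, LUA_GCCOLLECT, 0);       /* reclaim the now-unreferenced table */
}
```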
In step 203, the data objects that have undergone the memory release process are masked in the reference linked list.
Here, the data object that has been subjected to the memory release process may be masked in the reference linked list. According to different memory release processing modes, the shielding mode can also be changed correspondingly, for example, if the memory release processing can ensure that the data object with memory leakage does not have memory leakage, the shielding can be permanent shielding, for example, the data object with memory release processing in the reference linked list is directly deleted; for another example, in the case that it cannot be guaranteed that a data object with a memory leak will not have a memory leak after the memory release process, the masking may be a temporary masking, for example, masking a data object that has been subjected to the memory release process within a set masking time period (e.g., within one week). Therefore, the reference linked list can be simplified, the workload can be reduced when memory management is performed according to the reference linked list next time, and the memory management efficiency is improved.
As shown in fig. 3B, in the embodiment of the present application, the memory collection is performed according to the reference linked list, so that the accuracy and integrity of the memory collection can be ensured; and the data objects subjected to the memory release processing are shielded in the reference linked list, so that the efficiency of next memory management can be improved.
In some embodiments, referring to fig. 3C, fig. 3C is a schematic flowchart of a memory management method provided in the embodiments of the present application, and step 104 shown in fig. 3A may be implemented by step 301, or implemented by step 302 to step 303, which will be described with reference to each step.
In step 301, the data object whose occupied memory difference satisfies the memory leak condition is determined as the data object with the memory leak.
The embodiment of the application provides two modes for positioning the memory leakage source, wherein the first mode is intelligent positioning. For example, a set memory leak condition is obtained, and a data object occupying a memory difference and meeting the memory leak condition is determined as a data object with a memory leak. The memory leak condition may be set according to an actual application scenario, for example, the difference between the occupied memories is greater than a difference threshold.
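A sketch of this intelligent positioning mode, assuming the memory leak condition is simply that the occupied-memory difference exceeds a difference threshold; the threshold value and structure names are illustrative assumptions.

```c
#include <stddef.h>
#include <stdio.h>

#define LEAK_DIFF_THRESHOLD (4LL * 1024 * 1024)   /* e.g. 4 MB of growth */

typedef struct {
    const char *query_index;   /* name identifying the data object */
    long long   diff_bytes;    /* occupied-memory difference       */
} DiffEntry;

/* Flag every data object whose occupied-memory difference satisfies the
 * memory leak condition (here: growth larger than the threshold). */
static void report_leak_suspects(const DiffEntry *entries, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (entries[i].diff_bytes > LEAK_DIFF_THRESHOLD)
            printf("suspected memory leak: %s (+%lld bytes)\n",
                   entries[i].query_index, entries[i].diff_bytes);
}
```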
In step 302, the plurality of data objects and their corresponding occupied-memory differences are presented.
The second way of locating the source of the memory leak is through human-machine interaction; for example, the data objects whose memory has been collected and their corresponding occupied-memory differences can be presented as follows:
Data object 1: occupied-memory difference corresponding to data object 1
Data object 2: occupied-memory difference corresponding to data object 2
Data object 3: occupied-memory difference corresponding to data object 3
......
The order of presentation is not limited; for example, it may be any order, or, to help the user locate the root cause of the memory leak more quickly, the data objects may be presented in descending order of their occupied-memory differences.
When presenting the data object, at least one of an address (e.g., an address in the form of a pointer) of the data object and the query index may be presented, and of course, other content corresponding to the data object may also be presented, such as an address of a referenced parent data object, and the content corresponding to the data object may be collected together with the occupied memory. The query index may be regarded as a name of the data object, used for distinguishing different data objects, and also used for a user to query the data object.
In some embodiments, presenting the plurality of data objects and their corresponding occupied-memory differences may be implemented as follows: for any one data object, the following processing is performed: determining the reference chain where the data object is located according to the address of the parent data object referenced by the data object; fusing the query index corresponding to the data object with the query indexes corresponding to at least some of the data objects in the reference chain to obtain the fused query index corresponding to the data object, where the at least some data objects are distinct from the data object itself; and presenting the fused query index and the occupied-memory difference corresponding to the data object. The address of the parent data object referenced by the data object and the query index corresponding to the data object are collected through the injected memory collection program.
Here, an example of any data object to be presented will be described. When the occupied memory of any one data object is acquired through the injected memory acquisition program, the address of any one data object, the address of a parent data object referred by any one data object and the query index corresponding to any one data object can be acquired.
Because the acquired query index may be incomplete, and the user cannot identify the corresponding data object according to the query index, the reference chain where the any one data object is located may be determined according to the address of the parent data object referenced by the any one data object, and for the determined reference chain, the starting point (i.e., the first data object) is the root data object, and the ending point is the any one data object. For example, for a data object with query index c, the referenced parent data object is the data object with query index b; for a data object with query index b, whose referenced parent data object is the data object with query index a, when the data object with query index a is the root data object, the reference chain can be determined to be "a.b.c" (the reference chain identifies the data object with the query index). It should be noted that, in the case that there is a reference linked list corresponding to a process, the reference chain in which the any data object is located may also be determined in the reference linked list.
Then, fusion processing (for example, concatenation) is performed on the query index corresponding to the data object and the query indexes corresponding to at least some of the data objects in the reference chain to obtain a fused query index corresponding to the data object; this amounts to updating the query index of the data object (a brief sketch of this fusion is given after the presentation example below). The at least some data objects are different from the data object itself, and may for example be all of the other data objects in the reference chain. Returning to the above example, for the data object with query index c, after the reference chain "a.b.c" is obtained, a.b.c may be used as the fused query index of that data object, so that the fused query index embodies the complete reference relationship starting from the root data object. Finally, the fused query index corresponding to each data object and the occupied memory difference corresponding to that data object can be presented, which improves the accuracy and completeness of the presented content and, because the fused query index is more recognizable, helps the user identify the data object. For example, the presentation results may be as follows:
Fused query index corresponding to data object 1: occupied memory difference corresponding to data object 1
Fused query index corresponding to data object 2: occupied memory difference corresponding to data object 2
Fused query index corresponding to data object 3: occupied memory difference corresponding to data object 3
…… ……
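The fusion of query indexes described above amounts to walking the parent addresses up to the root data object and concatenating the keys along the way. A minimal sketch follows; the record fields (pointer, parent, key) mirror the memory information described in this application, while the concrete values are illustrative only.

# Sketch of building a fused query index by walking parent addresses up to
# the root data object.

def fuse_query_index(records):
    by_addr = {r["pointer"]: r for r in records}
    fused = {}
    for r in records:
        parts, node = [r["key"]], r
        while node.get("parent") in by_addr:          # stop once the root object is reached
            node = by_addr[node["parent"]]
            parts.append(node["key"])
        fused[r["pointer"]] = ".".join(reversed(parts))  # e.g. "a.b.c"
    return fused

records = [
    {"pointer": "0x1", "parent": None,  "key": "a"},
    {"pointer": "0x2", "parent": "0x1", "key": "b"},
    {"pointer": "0x3", "parent": "0x2", "key": "c"},
]
print(fuse_query_index(records)["0x3"])  # -> "a.b.c"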
In step 303, in response to the triggering operation for the plurality of data objects, the triggered data object is determined to be a data object with a memory leak.
For example, the user may determine which data object or data objects displayed have memory leakage according to actual conditions, and trigger the corresponding data objects. When the electronic equipment receives the triggering operation aiming at the plurality of data objects, the data objects triggered by the triggering operation are determined as the data objects with memory leakage. The form of the trigger operation is not limited in the embodiments of the present application, and for example, the trigger operation may be a touch operation (such as a click, a long press, and the like) or a non-touch operation (such as a voice input, a gesture input, and the like).
As shown in fig. 3C, the embodiment of the present application provides two ways to locate the memory leak source, which improves flexibility: either way can be applied according to the requirements of the actual application scenario.
In some embodiments, referring to fig. 3D, fig. 3D is a schematic flowchart of a memory management method provided in this embodiment, and after step 102 shown in fig. 3A, in step 401, for any one data object, a reference chain where the any one data object is located may also be determined according to an address of a parent data object referred to by the any one data object; the address of the father data object referenced by any one data object is acquired by the injected memory acquisition program.
In the embodiment of the application, after the memory acquisition, a visual memory distribution map can be presented. In step 102, in addition to acquiring the occupied memory of the data object corresponding to the process through the injected memory acquisition program, the address of the data object and the address of the parent data object referred by the data object may also be acquired. In this way, for any data object, the reference chain in which the any data object is located may be determined according to the address of the parent data object referred to by the any data object, and for the determined reference chain, the starting point is the root data object and the ending point is the any data object.
It should be noted that, in the case that there is a reference linked list corresponding to a process, the reference chain in which the any data object is located may also be determined in the reference linked list.
In step 402, a plurality of data objects in the reference chain are presented in a progressive relationship according to the reference order of the reference chain; wherein, the presentation area corresponding to the data object is positively correlated with the occupied memory; the progressive relationship includes any one of an inclusion relationship and a parallel relationship.
Here, the plurality of data objects in the reference chain are presented in a progressive relationship according to the reference order from the start point to the end point of the reference chain, yielding a memory distribution map. Presenting a data object may mean presenting at least one of the data object's address (e.g., an address in pointer form) and its query index, and other content corresponding to the data object may also be presented; the presentation area corresponding to a data object is positively correlated with its occupied memory; and the presentation area corresponding to a data object may be the sum of the presentation areas corresponding to all data objects that it references.
The progressive relationship may be an inclusion relationship, that is, the region corresponding to a next data object in the reference chain is contained within the region corresponding to the previous data object; the shape of the region is not limited and may be, for example, a rectangle or a circle. For example, if the reference chains corresponding to the process are "data object 1, data object 2" and "data object 1, data object 3", a memory distribution map generated according to the inclusion relationship may be as shown in fig. 4A: the area (i.e., presentation area) of the region corresponding to data object 1 is the sum of the areas of the regions corresponding to data object 2 and data object 3, and since the occupied memory of data object 3 is greater than that of data object 2, the area of the region corresponding to data object 3 is also greater than that of the region corresponding to data object 2.
The progressive relationship may also be a parallel relationship in a specific direction. Still taking the reference chains corresponding to the process as "data object 1, data object 2" and "data object 1, data object 3" as an example, the memory distribution map generated according to the parallel relationship may be as shown in fig. 4B. In fig. 4B, the regions corresponding to all the data objects have the same length on the y axis, and the area of the region corresponding to a data object is represented by its length on the x axis; that is, in fig. 4B the progressive relationship is a parallel relationship on the y axis (the x axis and the y axis may of course be exchanged, so the progressive relationship may also be a parallel relationship on the x axis). Because the memory occupied by data object 3 is greater than that occupied by data object 2, the length of the region corresponding to data object 3 on the x axis is also greater than the length of the region corresponding to data object 2 on the x axis.
In this way, the generated memory distribution map accurately and effectively reflects the reference order of the reference chain while also reflecting the memory occupied by the data objects, making it convenient for a user to optimize the process according to the memory distribution map, for example by modifying the code of certain data objects or deleting certain data objects.
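A minimal sketch of the parallel-relationship layout is given below: every data object is assigned an x-extent, a data object's extent is the sum of the extents of the data objects it references (a leaf object uses its own occupied memory), and depth along the reference chain is the direction in which objects are juxtaposed. The tree structure and sizes are illustrative assumptions.

def extent(node):
    # A data object's presented width is the sum of the widths of the data
    # objects it references; a leaf object uses its own occupied memory.
    children = node.get("children", [])
    return sum(extent(c) for c in children) if children else node["size"]

def layout(node, x0=0.0, depth=0, out=None):
    # Assign each data object an (x position, width) plus a depth along the
    # reference chain.
    if out is None:
        out = []
    out.append((node["name"], depth, x0, extent(node)))
    x = x0
    for child in node.get("children", []):
        layout(child, x, depth + 1, out)
        x += extent(child)
    return out

tree = {"name": "data object 1", "size": 30, "children": [   # size equals the sum of its children
    {"name": "data object 2", "size": 10, "children": []},
    {"name": "data object 3", "size": 20, "children": []},
]}
for name, depth, x, width in layout(tree):
    print(f"{'  ' * depth}{name}: x={x}, width={width}")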
In some embodiments, after step 402, further comprising: and in response to the triggering operation aiming at the plurality of data objects, determining the triggered data objects as the data objects with memory leaks.
After presenting the plurality of data objects in the reference chain in the progressive relationship, the electronic device may receive a trigger operation for the plurality of data objects, and determine the data object triggered by the trigger operation as the data object with the memory leak. Thus, the source of the memory leak can be located by another angle.
As shown in fig. 3D, in the embodiment of the present application, according to the reference sequence of the reference chain, the multiple data objects in the reference chain are presented in a progressive relationship, so that a user can quickly know the memory distribution of the multiple data objects corresponding to the process, and the user can optimize the process conveniently.
In some embodiments, referring to fig. 3E, fig. 3E is a schematic flowchart of a memory management method provided in this embodiment of the present application, where step 102 in fig. 3A may be updated to step 501, and in step 501, memory occupied by a plurality of data objects corresponding to a process is collected by an injected memory collection program at a first operation time and a second operation time of a virtual scene.
In this embodiment of the present application, the process may be used to run a virtual scene, where the type of the virtual scene is not limited; for example, the virtual scene may be a game virtual scene. At a first running time during the running of the virtual scene, the occupied memory of the plurality of data objects corresponding to the process is collected through the injected memory collection program; at a second running time during the running of the virtual scene, the occupied memory of the plurality of data objects corresponding to the process is again collected through the injected memory collection program.
The second running time is later than the first running time, and the first running time and the second running time can be set according to requirements in an actual application scene, which is not limited in the embodiment of the application.
In some embodiments, the virtual scene includes a first sub-scene and a second sub-scene; in this case, the collection of the occupied memory of the plurality of data objects corresponding to the process at the first running time and the second running time of the virtual scene may be implemented as follows: when switching from the first sub-scene to the second sub-scene is detected, the occupied memory of the plurality of data objects corresponding to the process is collected through the injected memory collection program and taken as the occupied memory collected at the first running time; when switching from the second sub-scene back to the first sub-scene is detected, the occupied memory of the plurality of data objects corresponding to the process is collected through the injected memory collection program and taken as the occupied memory collected at the second running time.
For example, the time of switching from the first sub-scene of the virtual scene to the second sub-scene may be taken as the first running time, and the time of switching from the second sub-scene back to the first sub-scene may be taken as the second running time. When the current time is detected to be the first running time, the occupied memory of the plurality of data objects corresponding to the process is collected through the injected memory collection program as the occupied memory collected at the first running time; when the current time is detected to be the second running time, the occupied memory of the plurality of data objects corresponding to the process is collected through the injected memory collection program as the occupied memory collected at the second running time.
The first sub-scene and the second sub-scene may be set according to the actual application scenario, for example as sub-scenes that are prone to memory leaks or that are more important. For example, the first sub-scene may be a sub-scene of waiting for a virtual task to be executed, and the second sub-scene may be a sub-scene of executing the virtual task; the moment the virtual task starts to be executed is the moment of switching from the first sub-scene to the second sub-scene, and the moment the virtual task is completed is the moment of switching from the second sub-scene back to the first sub-scene. Taking a virtual scene of military simulation as an example, the virtual task may be a virtual military activity task; taking a game virtual scene as an example, the virtual task may be one round of play, such as one round in an FPS game, a TPS game, an MOBA game, or a multiplayer gunfight survival game. In this way, the timing of memory management can be made more accurate according to the characteristics of the virtual scene, and the root cause of a memory leak can be accurately located at sub-scene switching, so that the virtual scene can run normally.
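A brief sketch of tying the two collection moments to sub-scene switches is shown below; the scene labels and the collect_memory_snapshot stand-in are hypothetical and only indicate where the injected memory collection program would be triggered.

snapshots = {}

def collect_memory_snapshot(process_id):
    # Placeholder: in the described scheme this would run the injected memory
    # collection program and return {query index: occupied memory}.
    return {}

def on_scene_switch(old_scene, new_scene, process_id):
    # Treat the switch into the second sub-scene as the first running time and
    # the switch back as the second running time.
    if old_scene == "waiting" and new_scene == "playing":
        snapshots["first"] = collect_memory_snapshot(process_id)
    elif old_scene == "playing" and new_scene == "waiting":
        snapshots["second"] = collect_memory_snapshot(process_id)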
In fig. 3E, step 103 shown in fig. 3A may be updated to step 502, and in step 502, the difference between the first occupied memory and the second occupied memory of the data object is determined; the first occupied memory is the occupied memory collected at the first running time; the second occupied memory is the occupied memory collected at the second operation time.
Here, the occupied memory collected at the first operation time is named as a first occupied memory, and the occupied memory collected at the second operation time is named as a second occupied memory. For each data object that has undergone memory acquisition, the difference in the occupied memory corresponding to the data object can be obtained by subtracting the first occupied memory of the data object from the second occupied memory of the data object.
As shown in fig. 3E, in the embodiment of the present application, by performing memory collection at the first operation time and the second operation time of the virtual scene, effective memory management can be performed on the relevant processes of the virtual scene, so as to ensure that the virtual scene can operate normally.
Next, an exemplary application of the embodiments of the present application in an actual application scenario will be described. The embodiments of the present application can be applied to various types of electronic devices, which may be terminal devices or servers; for ease of understanding, a server running a virtual scene is taken as an example, where the virtual scene is implemented in the Lua language. Here, the virtual scene is a game virtual scene of a multiplayer gunfight survival game, but it may be another type of virtual scene and may also be implemented in a programming language other than Lua. It should be noted that the embodiments of the present application may be applied to software projects implemented in any programming language; the above virtual-scene examples are only for convenience of description.
In the embodiment of the present application, memory management may be performed on a process used for running a virtual scene in a server, and a process of memory management will be described in a step form.
Step 1): acquiring a process identifier, namely a process ID, of the Lua process needing memory management, and injecting a memory acquisition program into the Lua process according to the process ID, wherein the memory acquisition program can be a script or other codes. Therefore, the method can perform memory collection on the Lua process by running the injected memory collection program, and generate a memory information file in the current working directory (or called process working directory) of the Lua process, wherein the memory information file at least comprises occupied memories of a plurality of Table data objects corresponding to the Lua process, the Table data structures are common data structures in the Lua language, and the Table data objects are also data objects which need to perform memory collection in the embodiment of the application. When the operation of the memory collection program is completed (the memory information file is obtained), the memory collection program can be unloaded in the Lua process. And as for the subsequent operation, the subsequent operation can be executed offline by other programs (such as scripts), so as to ensure that the operation of the Lua process is not interfered, wherein the offline operation refers to the operation independent from the Lua process.
Step 2): generating a visual memory distribution map according to the memory information file obtained in the step 1). Step 2) may be performed by a program (e.g., a script) independent of the Lua process. An embodiment of the present application provides a schematic diagram of a memory distribution graph as shown in fig. 5, a plurality of Table data objects in a reference chain are presented in a progressive relationship according to a reference sequence of the reference chain, where a presentation area of a Table data object is positively correlated with an occupied memory, that is, the larger the occupied memory of the Table data object is, the larger an area (that is, a presentation area) of a corresponding region of the Table data object in the memory distribution graph is, for example, in fig. 5, lengths of regions corresponding to all Table data objects on a y axis are consistent, and the area of the region corresponding to the Table data object can be represented by the length of the region on an x axis. In fig. 5, the progressive relationship may be a parallel relationship on the y-axis.
In the memory distribution map, the presentation area of a Table data object may be the sum of the presentation areas of all Table data objects that it references, so that the reference relationship is embodied more accurately and effectively. For example, in fig. 5, the presentation area of the Table data object named "_G._F" is the sum of the presentation areas of the data objects named "activ.", "batc." and "tlo." at the next level (the next level is juxtaposed along the y-axis direction).
Step 3): and (3) performing memory collection again according to the mode of the step 1) to obtain a second memory information file. The occupied memory difference corresponding to each Table data object in the Lua process can be obtained by comparing the two memory information files, the occupied memory difference files can be obtained by sequencing the plurality of Table data objects according to the sequence of the occupied memory difference from large to small, and the user can be helped to quickly locate the root cause of memory leakage by presenting the occupied memory difference files. The operation of comparing the two memory information files and the operation of sorting processing can be executed by a program (such as a script) independent of the Lua process. For ease of understanding, the following occupied memory difference file is shown:
[The occupied memory difference file is shown as an image in the original publication; it is a list of key/size entries.]
In the occupied memory difference file, key represents the name of the Table data object, and size represents the occupied memory difference corresponding to that Table data object.
At the underlying implementation level, the embodiment of the present application may include three steps: 1) collecting the occupied memory of the Lua process (collecting twice if the memory leak source needs to be located); 2) locating the memory leak source; 3) generating a visual memory distribution map. These are described in detail below.
1) Collecting the occupied memory of the Lua process.
In the embodiment of the application, the memory collection program can be injected into the Lua process in a non-invasive injection mode, so that the memory collection is realized. The embodiment of the present application provides a schematic diagram of memory collection as shown in fig. 6, which will be described with reference to each step.
First, the process ID of the Lua process is determined. Taking a server running the Linux operating system as an example, the process ID of the Lua process running in the Linux operating system can be obtained through the ps command.
Second, the address of the luaV_execute function corresponding to the Lua process is determined. The luaV_execute function corresponds to the execution state machine and traverses the array of binary opcodes to execute Lua instructions one by one; its first parameter is the address of the lua_State struct. For example, the GNU Debugger (GDB) tool may be invoked with the process ID to obtain the address of the luaV_execute function.
Third, the address of the lua_State struct corresponding to the Lua process is determined according to the process ID and the address of the luaV_execute function. The lua_State struct contains state information of the Lua process's running environment, and the state information includes at least data object information. For example, the process ID and the address of the luaV_execute function may be passed to the hookso tool to obtain the address of the lua_State struct, where hookso is a tool for injecting into, modifying and searching Linux dynamic link libraries.
Fourth, the memory collection program is injected into the Lua process, for example by invoking the dlopen function through the hookso tool with the process ID, so as to load the SO library of the memory collection program (which can be customized according to the actual application scenario) into the Lua process, thereby completing the injection.
Fifth, the injected memory collection program is executed to collect the occupied memory of each data object corresponding to the Lua process, and the collection result is written to a memory information file for use in subsequent steps. For example, the process ID and the address of the lua_State struct may be passed to the hookso tool so that the memory collection program performs the collection.
Sixth, the injected memory collection program is unloaded, for example by invoking the dlclose function through the hookso tool with the process ID and the address of the lua_State struct, so as to unload the SO library of the memory collection program from the Lua process.
It should be noted that, during memory collection, the Garbage Collection (GC) object linked list corresponding to the Lua process (corresponding to the reference linked list above) may be traversed. For a traversed Table data object, the memory occupied by data of a set type among the multiple types of data included in the traversed Table data object is determined through a memory occupation measurement function (such as the sizeof function), and the memory occupied by the data of the set type is taken as the occupied memory of the traversed Table data object, where the set type is at least one of the multiple types and can be set according to the actual application scenario. For example, the set type of data may include the metatable (Metatable), the array, and the hash table (Hash). When the occupied memory of the set-type data is taken as the occupied memory of the traversed Table data object, the occupied memory of the set-type data may also be stored (for example, in the memory information file); for example, the occupied memory corresponding to the metatable, the array and the hash table may each be stored so that it can be presented when needed.
2) Locating the memory leak source.
The obtained memory information files are parsed, and the resulting occupied memory difference file is presented to help the user locate the source of the memory leak. The embodiment of the present application provides a schematic diagram of memory leak analysis as shown in fig. 7, which will be described with reference to the individual steps.
First, memory collection is performed at a first running time and a second running time of the virtual scene to obtain a memory information file corresponding to each, where the second running time is later than the first. The longer the interval between the two running times, the more pronounced a memory leak becomes and the easier it is to locate its root cause; the first and second running times can be set according to the actual application scenario.
For example, when data is continuously inserted into a Table data object with query index (i.e., name) a through a while loop, the memory information file obtained after memory collection is as follows:
[The example memory information file is shown as an image in the original publication; it contains one record per Table data object.]
In the memory information file, memory information corresponding to each of the plurality of Table data objects is shown, and each piece of memory information includes parameters such as pointer, key, has_next, parent, is_number-key, and size. pointer represents the address of the Table data object, which may be expressed in pointer form. key represents the query index, i.e., the name, of the Table data object, so that the user can query the Table data object conveniently. has_next corresponds to the reference end identifier above and indicates whether the reference chain where the Table data object is located has finished being referenced: when has_next is 1, the reference chain has not ended and traversal of the reference chain can continue; when has_next is 0, the reference chain has ended and traversal of the reference chain can stop. parent represents the address of the parent node of the Table data object (corresponding to the parent data object above), which may also be expressed in pointer form; for example, in the reference chain "a.b.c.d.eee", the parent node of eee is d, the parent node of d is c, and so on. is_number-key indicates whether the key of the Table data object is a number: when is_number-key is 1, the key is a number; when is_number-key is 0, the key is not a number.
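Since the example memory information file is only reproduced as an image, the exact on-disk layout is not visible here; the following parsing sketch therefore assumes a simple comma-separated key:value layout using the field names described above.

# Sketch of parsing one memory-information record into a dict. The line
# format is an assumption; the field names follow the description above.

def parse_record(line):
    """e.g. 'pointer:0x55aa,key:a,has_next:1,parent:0x0,is_number-key:0,size:88'"""
    record = {}
    for part in line.strip().split(","):
        name, _, value = part.partition(":")
        record[name] = value
    record["size"] = int(record.get("size", 0))
    record["has_next"] = int(record.get("has_next", 0))
    return record

print(parse_record("pointer:0x55aa,key:a,has_next:1,parent:0x0,is_number-key:0,size:88"))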
Second, to facilitate comparison, each collected memory information file can be parsed into a Map data object with a Map data structure. On this basis, because the Map data object may contain a large amount of memory information, the memory information in the Map data object can be merged to simplify its content and to improve the efficiency and accuracy of the subsequent comparison. For example, if the Map data object includes multiple pieces of memory information with the same pointer, only one of those pieces of memory information may be retained during the merging process.
Third, the two Map data objects are compared to obtain a new Map data object. For example, name the Map data object corresponding to the first running time Map data object 1 and the one corresponding to the second running time Map data object 2. Then, for each piece of memory information in Map data object 2, its comparison index is obtained, and the size in the memory information of Map data object 1 that has the same comparison index is subtracted from the size in that piece of memory information; the result is taken as the size of the memory information with the same comparison index in Map data object 3. The comparison index may include at least one of pointer and key. Map data object 3 is the new Map data object, and in Map data object 3 size no longer indicates occupied memory but the occupied memory difference.
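The comparison in this step can be sketched as a dictionary difference keyed by the comparison index; here (pointer, key) is used as the comparison index, and the sizes are illustrative only.

# Sketch of comparing the two Map data objects: for each record collected at
# the second running time, subtract the size recorded at the first running
# time for the same comparison index.

def diff_maps(map1, map2):
    """map1/map2: {(pointer, key): size}; returns {(pointer, key): size difference}."""
    return {idx: size2 - map1.get(idx, 0) for idx, size2 in map2.items()}

map1 = {("0x55aa", "_G.a"): 1000, ("0x55bb", "_G.cfg"): 200}
map2 = {("0x55aa", "_G.a"): 137528, ("0x55bb", "_G.cfg"): 200}
print(diff_maps(map1, map2))   # _G.a grows by 136528, _G.cfg is unchanged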
Fourth, the pieces of memory information in Map data object 3 are sorted in descending order of occupied memory difference; the closer a piece of memory information is to the front of the order, the more likely the corresponding Table data object is to have a memory leak.
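The descending sort can then be expressed in one line over the differences, with the largest differences first since these are the most likely leak sources; a sketch follows.

# Sort the occupied memory differences in descending order so that the most
# likely leak sources appear first.

def sort_by_difference(diffs):
    """diffs: {(pointer, key): occupied memory difference} -> sorted list of items."""
    return sorted(diffs.items(), key=lambda item: item[1], reverse=True)

print(sort_by_difference({("0x1", "_G.a"): 136528, ("0x2", "_G.cfg"): 0}))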
Fifth, in Map data object 3, the pointer in the memory information is of little help for locating the root cause of a memory leak, and the key in the memory information is often incomplete. Therefore, the reference chain of the Table data object can be determined according to the parent in the memory information, and the key in the memory information can be fused with the keys of the other Table data objects in the reference chain to update the key. The updated key is the fused query index described above, which embodies the complete reference relationship starting from _G, where _G is the root node.
Sixth, Map data object 3 is output to a memory difference file, and the memory difference file is presented to help the user locate the source of the memory leak, that is, to locate which one or more Table data objects have a memory leak. The memory difference file is as follows:
key:_R._LOADED._G,size:171288
key:_G.a,size:136528
key:_R._LOADED,size:107136
key:_G.a[718.000000],size:88
key:_G.a[96.000000],size:88
key:_G.a[737.000000],size:88
……
Here, _R._LOADED._G corresponds to the Lua registry and is equivalent to a root node, and is therefore not an object for determining whether a memory leak occurs. From this memory difference file, it can be quickly and accurately determined that the Table data object named _G.a has a memory leak.
3) Generating a visual memory distribution map.
A visual memory distribution map may be generated from the memory information file obtained through memory collection; the steps shown in fig. 8 are described below.
First, after the memory information file is obtained through memory collection, its contents can be parsed and added into a Map data object, and the memory information in the Map data object is merged.
Second, for each piece of memory information in the Map data object, a complete reference chain starting from _G can be established according to the parent in the memory information, so as to update the key in the memory information.
Third, the updated key (i.e., the fused query index) and size (i.e., the occupied memory) corresponding to each Table data object in the Map data object are input into a visualization tool, and the visualization tool outputs a visual memory distribution map, as shown in fig. 5. The type of visualization tool is not limited in the embodiments of the present application; it may be, for example, the flamegraph tool.
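If the flamegraph tool is used, its folded-stack input can be produced from the fused query index and occupied memory as sketched below; treating the dot-separated fused key as a semicolon-separated stack and the occupied memory as the count is an assumption about how the tool is fed, not something specified in this publication.

# Sketch of turning fused query indexes and occupied memory into folded lines
# for a flame-graph style visualizer: '_G.a' with size 136528 becomes '_G;a 136528'.

def to_folded_lines(entries):
    """entries: [("_G.a.b", 88), ...] -> lines like '_G;a;b 88'."""
    return [f"{key.replace('.', ';')} {size}" for key, size in entries]

for line in to_folded_lines([("_G.a", 136528), ("_R._LOADED", 107136)]):
    print(line)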
The embodiment of the present application has at least the following technical effects: 1) the memory collection program is injected into the Lua process and memory collection is performed in a non-invasive manner, which ensures that the original code of the Lua process is not changed and that the memory occupied by the memory collection program is not counted into the memory occupied by the Lua process; 2) the occupied memory of each Table data object in the Lua process can be collected, and compared with counting reference times as in the related art, using occupied memory locates the root cause of a memory leak more accurately; 3) a visual memory distribution map can be provided, which helps locate the root cause of a memory leak and also facilitates subsequent optimization, for example performing memory release processing on certain Table data objects according to the memory distribution map; 4) by determining the occupied memory differences and sorting them in descending order, the data object with a memory leak can be determined quickly and accurately; 5) the method can be applied to all Lua processes without distinction, i.e., there is no special requirement on the Lua process itself.
Continuing with the exemplary structure of the memory management device 455 provided by the present application implemented as software modules, in some embodiments, as shown in fig. 2, the software modules stored in the memory management device 455 of the storage 450 may include: an injection module 4551, configured to inject a memory collection program into an operating process; the acquisition module 4552 is configured to acquire, through an injected memory acquisition program, occupied memories of a plurality of data objects corresponding to a process; a difference determining module 4553, configured to determine a difference between occupied memories of the data objects at different times; the screening module 4554 is configured to determine a data object with memory leak among the multiple data objects according to the difference between the occupied memories of the multiple data objects.
In some embodiments, the data object information corresponding to the process includes a reference linked list, where the reference linked list includes a plurality of data objects having reference relationships; the acquisition module 4552 is further configured to: executing the following processing through the injected memory acquisition program: and traversing a plurality of data objects in the reference linked list according to the reference sequence of the reference linked list, and acquiring the occupied memory of the traversed data objects.
In some embodiments, the reference linked list comprises at least one reference chain; the acquisition module 4552 is further configured to: when the reference of the target reference chain where the traversed data object is located is finished, stopping traversing the target reference chain; and when the target reference chain where the traversed data object is located is not referenced, traversing the data object which is not traversed in the target reference chain according to the reference sequence of the target reference chain.
In some embodiments, the memory management device 455 further includes: the memory release module is used for performing memory release processing on the data object with memory leakage; and the shielding module is used for shielding the data object subjected to the memory release processing in the reference linked list.
In some embodiments, the memory management device 455 further includes an addressing module for: determining a process identifier corresponding to a process; determining the address of an execution state machine corresponding to the process according to the process identifier; the execution state machine is used for managing data object information corresponding to the process, and the data object information comprises a plurality of data objects; determining the address of the data object information according to the process identifier and the address of the execution state machine; the acquisition module 4552 is further configured to: and calling the address of the data object information through the injected memory acquisition program so as to acquire the memory occupied by a plurality of data objects in the data object information.
In some embodiments, the data objects include multiple types of data; the acquisition module 4552 is further configured to: executing the following processing through the injected memory acquisition program: aiming at any data object corresponding to the process, acquiring an occupied memory of data of a set type included by any data object to serve as the occupied memory of any data object; wherein the setting type is at least one of a plurality of types.
In some embodiments, the injection module 4551 is further configured to: loading a shared object library corresponding to the memory acquisition program to a process; the memory management device 455 further includes an unloading module, configured to unload the shared object library corresponding to the memory collection program in the process.
In some embodiments, the screening module 4554 is further configured to: any one of the following processes is performed: determining the data object occupying the memory difference and meeting the memory leakage condition as the data object with memory leakage; and presenting the difference of the occupied memory corresponding to the plurality of data objects and the plurality of data objects respectively, and determining the triggered data objects as the data objects with memory leakage in response to the triggering operation aiming at the plurality of data objects.
In some embodiments, the screening module 4554 is further configured to: for any one data object, the following processing is performed: determining a reference chain where any one data object is located according to the address of a parent data object referenced by any one data object; fusing the query index corresponding to any one data object with the query indexes corresponding to at least part of the data objects in the reference chain to obtain a fused query index corresponding to any one data object; wherein at least some of the data objects are distinct from any one of the data objects; presenting fusion query indexes corresponding to any one data object and occupied memory difference; the address of the parent data object referred by any data object and the query index corresponding to any data object are acquired by an injected memory acquisition program.
In some embodiments, the memory management device 455 further comprises a presentation module for: aiming at any data object, determining a reference chain where the any data object is located according to the address of a parent data object referenced by the any data object; the address of a father data object quoted by any one data object is acquired by an injected memory acquisition program; presenting a plurality of data objects in the reference chain in a progressive relationship according to the reference order of the reference chain; wherein, the presentation area corresponding to the data object is positively correlated with the occupied memory; the progressive relationship includes any one of an inclusion relationship and a parallel relationship.
In some embodiments, the process is used to run a virtual scene; the acquisition module 4552 is further configured to: at a first running time and a second running time of the virtual scene, respectively acquiring occupied memory of a plurality of data objects corresponding to a process through an injected memory acquisition program; a difference determination module 4553, further configured to: determining the difference of the occupied memory between the first occupied memory and the second occupied memory of the data object; the first occupied memory is the occupied memory collected at the first running time; the second occupied memory is the occupied memory collected at the second operation time.
In some embodiments, the virtual scene includes a first sub-scene and a second sub-scene; the acquisition module 4552 is further configured to: when the situation that the first sub-scene is switched to the second sub-scene is detected, the occupied memory of a plurality of data objects corresponding to the process is acquired through the injected memory acquisition program and is used as the occupied memory acquired at the first running time; when the switching from the second sub-scene to the first sub-scene is detected, the occupied memory of the plurality of data objects corresponding to the process is acquired through the injected memory acquisition program, and the occupied memory is used as the occupied memory acquired at the second running time.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions (i.e., executable instructions) stored in a computer readable storage medium. The processor of the electronic device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the electronic device executes the memory management method described in this embodiment of the present application.
Embodiments of the present application provide a computer-readable storage medium storing executable instructions, where the executable instructions are stored, and when executed by a processor, the executable instructions cause the processor to execute a memory management method provided by embodiments of the present application, for example, the memory management method shown in fig. 3A, fig. 3B, fig. 3C, fig. 3D, and fig. 3E.
In some embodiments, the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a hypertext Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
As an example, executable instructions may be deployed to be executed on one electronic device or on multiple electronic devices located at one site or distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (15)

1. A memory management method, the method comprising:
injecting a memory acquisition program into a running process;
acquiring occupied memory of a plurality of data objects corresponding to the process through the injected memory acquisition program;
determining the difference of occupied memory of the data object at different moments;
and determining the data objects with memory leakage in the plurality of data objects according to the difference of the occupied memories corresponding to the plurality of data objects respectively.
2. The method of claim 1, wherein the data object information corresponding to the process comprises a reference linked list, wherein the reference linked list comprises a plurality of data objects with reference relationships;
the acquiring, by the injected memory acquisition program, occupied memories of the plurality of data objects corresponding to the processes includes:
executing the following processing through the injected memory acquisition program:
and traversing a plurality of data objects in the reference linked list according to the reference sequence of the reference linked list, and acquiring the occupied memory of the traversed data objects.
3. The method of claim 2, wherein the reference linked list comprises at least one reference chain; in the process of performing traversal processing on the plurality of data objects in the reference linked list according to the reference order of the reference linked list, the method further includes:
when the reference of the target reference chain where the traversed data object is located is finished, stopping traversing the target reference chain;
and when the target reference chain where the traversed data object is located is not referenced, traversing the data object which is not traversed in the target reference chain according to the reference sequence of the target reference chain.
4. The method according to claim 2, wherein after determining, according to the difference in occupied memory corresponding to each of the plurality of data objects, that there is a memory leak in the data object, the method further comprises:
performing memory release processing on the data object with memory leakage;
and shielding the data object subjected to the memory release processing in the reference linked list.
5. The method of claim 1, wherein before the capturing, by the injected memory capture program, memory usage of the plurality of data objects corresponding to the process, the method further comprises:
determining a process identifier corresponding to the process;
determining the address of an execution state machine corresponding to the process according to the process identifier; the execution state machine is used for managing data object information corresponding to the process, and the data object information comprises a plurality of data objects;
determining the address of the data object information according to the process identification and the address of the execution state machine;
the acquiring, by the injected memory acquisition program, occupied memories of the plurality of data objects corresponding to the processes includes:
and calling the address of the data object information through the injected memory acquisition program so as to acquire the memory occupied by a plurality of data objects in the data object information.
6. The method of claim 1, wherein the data object comprises multiple types of data; the acquiring, by the injected memory acquisition program, occupied memories of the plurality of data objects corresponding to the processes includes:
executing the following processing through the injected memory acquisition program:
acquiring an occupied memory of data of a set type included in any data object aiming at any data object corresponding to the process, wherein the occupied memory is used as the occupied memory of any data object;
wherein the setting type is at least one of the plurality of types.
7. The method of claim 1, wherein injecting the memory harvesting program into the running process comprises:
loading a shared object library corresponding to the memory acquisition program to the process;
after the injected memory collection program collects the occupied memory of the plurality of data objects corresponding to the process, the method further includes:
and unloading the shared object library corresponding to the memory collection program in the process.
8. The method according to any one of claims 1 to 7, wherein the determining, according to the difference in occupied memory corresponding to each of the plurality of data objects, a data object with a memory leak among the plurality of data objects includes:
any one of the following processes is performed:
determining the data object occupying the memory difference and meeting the memory leakage condition as the data object with memory leakage;
and presenting the difference of the occupied memory corresponding to the data objects and the data objects respectively, and determining the triggered data objects as the data objects with memory leakage in response to the triggering operation aiming at the data objects.
9. The method of claim 8, wherein the presenting the plurality of data objects and the difference in memory usage corresponding to each of the plurality of data objects comprises:
for any one data object, the following processing is performed:
determining a reference chain where any one data object is located according to the address of a parent data object referenced by the any one data object;
fusing the query index corresponding to any one data object with the query indexes corresponding to at least part of data objects in the reference chain to obtain a fused query index corresponding to any one data object; wherein the at least partial data object is distinct from the arbitrary one data object;
presenting the fusion query index corresponding to any one data object and the difference of occupied memory;
the address of the parent data object referred by any one data object and the query index corresponding to any one data object are acquired by the injected memory acquisition program.
10. The method according to any one of claims 1 to 7, wherein after the capturing, by the injected memory capture program, the occupied memory of the plurality of data objects corresponding to the process, the method further comprises:
aiming at any data object, determining a reference chain where the any data object is located according to the address of a parent data object referenced by the any data object; the address of the father data object quoted by any one data object is acquired by the injected memory acquisition program;
presenting a plurality of data objects in the reference chain in a progressive relationship according to the reference order of the reference chain; wherein the presentation area corresponding to the data object is positively correlated with the occupied memory; the progressive relationship includes any one of an inclusion relationship and a parallel relationship.
11. The method of any one of claims 1 to 7, wherein the process is configured to run a virtual scene; the acquiring, by the injected memory acquisition program, occupied memories of the plurality of data objects corresponding to the processes includes:
acquiring occupied memories of a plurality of data objects corresponding to the processes through the injected memory acquisition programs at a first running time and a second running time of the virtual scene respectively;
the determining the difference of the occupied memories of the data objects at different time points includes:
determining the difference of the occupied memory between the first occupied memory and the second occupied memory of the data object;
the first occupied memory is the occupied memory collected at the first running time; the second occupied memory is the occupied memory acquired at the second operation time.
12. The method of claim 11, wherein the virtual scene comprises a first sub-scene and a second sub-scene; the step of acquiring occupied memories of a plurality of data objects corresponding to the processes through the injected memory acquisition programs at the first running time and the second running time of the virtual scene respectively comprises the following steps:
when the situation that the first sub-scene is switched to the second sub-scene is detected, acquiring occupied memories of a plurality of data objects corresponding to the process through the injected memory acquisition program to serve as the occupied memories acquired at the first running time;
and when the second sub-scene is detected to be switched to the first sub-scene, acquiring occupied memories of a plurality of data objects corresponding to the processes through the injected memory acquisition program to serve as the occupied memories acquired at a second running time.
13. A memory management device, the device comprising:
the injection module is used for injecting the memory acquisition program into the running process;
the acquisition module is used for acquiring the occupied memory of the data objects corresponding to the process through the injected memory acquisition program;
the difference determining module is used for determining the difference of the occupied memories of the data objects at different moments;
and the screening module is used for determining the data objects with memory leakage in the plurality of data objects according to the difference of the occupied memories corresponding to the plurality of data objects respectively.
14. An electronic device, comprising:
a memory for storing executable instructions;
a processor, configured to execute the executable instructions stored in the memory, to implement the memory management method according to any one of claims 1 to 12.
15. A computer-readable storage medium storing executable instructions for implementing the memory management method of any one of claims 1 to 12 when executed by a processor.
CN202110457259.9A 2021-04-27 2021-04-27 Memory management method and device, electronic equipment and computer readable storage medium Pending CN113157455A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110457259.9A CN113157455A (en) 2021-04-27 2021-04-27 Memory management method and device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110457259.9A CN113157455A (en) 2021-04-27 2021-04-27 Memory management method and device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN113157455A true CN113157455A (en) 2021-07-23

Family

ID=76871332

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110457259.9A Pending CN113157455A (en) 2021-04-27 2021-04-27 Memory management method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113157455A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117648200A (en) * 2024-01-29 2024-03-05 龙芯中科技术股份有限公司 Memory management method, memory management device, electronic equipment and readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106610892A (en) * 2015-10-23 2017-05-03 腾讯科技(深圳)有限公司 Memory leak detecting method and device
WO2019091244A1 (en) * 2017-11-07 2019-05-16 晶晨半导体(上海)股份有限公司 Linux kernel-based memory leakage detection method
CN110231994A (en) * 2019-06-20 2019-09-13 深圳市腾讯网域计算机网络有限公司 Memory analysis method, apparatus and computer readable storage medium
CN111858317A (en) * 2020-06-30 2020-10-30 北京金山云网络技术有限公司 Method and device for detecting memory leakage
CN111966603A (en) * 2020-09-04 2020-11-20 网易(杭州)网络有限公司 Memory leak detection method and device, readable storage medium and electronic equipment
CN112214394A (en) * 2020-09-02 2021-01-12 深圳市优必选科技股份有限公司 Memory leak detection method, device and equipment


Similar Documents

Publication Publication Date Title
US10621068B2 (en) Software code debugger for quick detection of error root causes
CN102667730B (en) Design time debugging
CN105740144B (en) A kind of automated testing method and system of Android mobile terminal
CN110457211B (en) Script performance test method, device and equipment and computer storage medium
TW201537343A (en) Recognition application scenarios, power management method, apparatus and terminal equipment
CN102142016A (en) Cross-browser interactivity recording, playback and editing
CN107608609B (en) Event object sending method and device
CN111389014A (en) Game resource data monitoring method and device, computer equipment and storage medium
CN112691381B (en) Rendering method, device and equipment of virtual scene and computer readable storage medium
CN111258680B (en) Resource loading method and device, storage medium and electronic device
CN112947969A (en) Page off-screen rendering method, device, equipment and readable medium
CN113157455A (en) Memory management method and device, electronic equipment and computer readable storage medium
CN112783660B (en) Resource processing method and device in virtual scene and electronic equipment
CN114581580A (en) Method and device for rendering image, storage medium and electronic equipment
CN112131112B (en) Operation information acquisition method and device, storage medium and electronic equipment
CN111562952B (en) Dynamic loading method and dynamic loading device for double-core intelligent ammeter management unit
CN112817817A (en) Buried point information query method and device, computer equipment and storage medium
CN111880804A (en) Application program code processing method and device
CN112631949B (en) Debugging method and device, computer equipment and storage medium
CN114706767A (en) Code coverage rate acquisition method, device and equipment
CN114706581A (en) Image analysis method, image analysis device, computer equipment and storage medium
CN113110846A (en) Method and device for acquiring environment variable
CN114090434A (en) Code debugging method and device, computer equipment and storage medium
CN112451967A (en) Game interaction method and device based on 3D screen interaction and computer equipment
CN112463626A (en) Memory leak positioning method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40048711

Country of ref document: HK