CN112084024B - Memory monitoring method, device, medium and electronic equipment - Google Patents

Memory monitoring method, device, medium and electronic equipment

Info

Publication number
CN112084024B
CN112084024B CN202010896417.6A
Authority
CN
China
Prior art keywords
memory
thread
information
instruction address
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010896417.6A
Other languages
Chinese (zh)
Other versions
CN112084024A (en)
Inventor
朴英敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN202010896417.6A priority Critical patent/CN112084024B/en
Publication of CN112084024A publication Critical patent/CN112084024A/en
Application granted granted Critical
Publication of CN112084024B publication Critical patent/CN112084024B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3003Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F11/302Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system component is a software system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3051Monitoring arrangements for monitoring the configuration of the computing system or of the computing system component, e.g. monitoring the presence of processing resources, peripherals, I/O links, software programs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/362Software debugging
    • G06F11/366Software debugging using diagnostics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5022Mechanisms to release resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Quality & Reliability (AREA)
  • Mathematical Physics (AREA)
  • Computer Hardware Design (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The disclosure provides a memory monitoring method, a memory monitoring device, a medium and an electronic device. The method comprises the following steps: in a running program, acquiring unique identity information of each thread; acquiring thread stack information of the corresponding thread based on the unique identity information; acquiring a memory information set; matching the call instruction address of each information unit in the memory information set with the thread stack information of each thread, and determining the matching call instruction addresses of the corresponding thread; acquiring the matching application memory sizes of the corresponding thread from the information units of the memory information set based on the matching call instruction addresses; and, for each thread, calculating the sum of the matching application memory sizes and the thread memory size to obtain the occupied memory value of the corresponding thread. With this method and device, there is no need to assume a detection target or to frequently access the call stack: memory monitoring code is set in the program and the occupied memory values of all threads are counted at run time, so that a developer can quickly locate a thread with a memory leak from an abnormal occupied memory value. The problems of low thread running efficiency and poor test effect caused by frequently accessing the call stack are avoided.

Description

Memory monitoring method, device, medium and electronic equipment
Technical Field
The disclosure relates to the technical field of computers, and in particular relates to a memory monitoring method, a memory monitoring device, a memory monitoring medium and electronic equipment.
Background
Memory leakage refers to the waste of system memory caused by dynamically allocated heap memory in a program that, for whatever reason, is not released or cannot be released, with serious consequences such as slow program execution or even system crash. Memory leak defects are hidden and cumulative, and are therefore harder to detect than other illegal memory access errors. A memory leak arises because a memory block is not released; it is a defect of omission rather than of commission. In addition, a memory leak usually produces no directly observable error symptom, but accumulates gradually, degrading overall system performance and, in extreme cases, causing the system to crash.
At present, in a multithreading, multitasking environment, the existing methods for monitoring memory leaks (such as instrumentation, preprocessing of the code under test, and static analysis) all require a detection target to be assumed, and detection code must be written for each different detection target, so debugging is time-consuming and labor-intensive; moreover, at run time the detection code must frequently access the call stack, which slows the process down.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The disclosure aims to provide a memory monitoring method, a memory monitoring device, a memory monitoring medium and an electronic device, which can solve at least one technical problem mentioned above. The specific scheme is as follows:
according to a specific embodiment of the present disclosure, in a first aspect, the present disclosure provides a memory monitoring method, including:
in the running program, acquiring unique identity information of a thread;
acquiring thread stack information of a corresponding thread based on the unique identity information; the thread stack information at least comprises a thread starting address and a thread memory size;
acquiring a memory information set; each information unit in the memory information set comprises an application memory size dynamically acquired by a thread in the running program for calling a called function and a calling instruction address for calling the called function in a corresponding thread, wherein the value of the calling instruction address comprises a starting address of the called function;
matching the call instruction address of each information unit in the memory information set with the thread stack information of each thread respectively, and determining the matched call instruction address of the corresponding thread;
acquiring the matching application memory size of the corresponding thread from the information unit of the memory information set based on the matching call instruction address respectively;
and respectively calculating the sum of the memory size of the matching application and the memory size of the thread based on each thread, and obtaining the occupied memory value of the corresponding thread.
According to a second aspect of the present disclosure, there is provided a memory monitoring device, including:
the unique identity information obtaining unit is used for obtaining the unique identity information of the thread in the running program;
the thread stack information acquisition unit is used for acquiring thread stack information of a corresponding thread based on the unique identity information; the thread stack information at least comprises a thread starting address and a thread memory size;
the memory information acquisition unit is used for acquiring a memory information set; each information unit in the memory information set comprises an application memory size dynamically acquired by a thread in the running program for calling a called function and a calling instruction address for calling the called function in a corresponding thread, wherein the value of the calling instruction address comprises a starting address of the called function;
a determining matched calling instruction address unit, configured to respectively match the calling instruction address of each information unit in the memory information set with the thread stack information of each thread, and determine the matched calling instruction address of the corresponding thread;
the memory size obtaining unit is used for obtaining the memory size of the matching application of the corresponding thread from the information unit of the memory information set based on the matching call instruction address respectively;
and the memory value acquiring unit is used for respectively calculating the sum of the memory size of the matching application and the memory size of the thread based on each thread and acquiring the memory value occupied by the corresponding thread.
According to a third aspect of the disclosure, there is provided a computer readable storage medium having stored thereon a computer program which when executed by a processor implements the memory monitoring method according to any of the first aspects.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: one or more processors; storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement a memory monitoring method as claimed in any of the first aspects.
Compared with the prior art, the scheme of the embodiment of the disclosure has at least the following beneficial effects:
the disclosure provides a memory monitoring method, a memory monitoring device, a medium and electronic equipment. The memory monitoring method comprises the following steps: in the running program, acquiring unique identity information of a thread; acquiring thread stack information of a corresponding thread based on the unique identity information; acquiring a memory information set; matching the call instruction address of each information unit in the memory information set with the thread stack information of each thread respectively, and determining the matched call instruction address of the corresponding thread; acquiring the matching application memory size of the corresponding thread from the information unit of the memory information set based on the matching call instruction address respectively; and respectively calculating the sum of the memory size of the matching application and the memory size of the thread based on each thread, and obtaining the occupied memory value of the corresponding thread. According to the method and the device, a detection target and a frequently accessed call stack are not required to be assumed, memory monitoring codes are set in a program, and the situation that all threads occupy memory values in running is counted, so that a developer can rapidly locate threads with memory leakage through abnormal situations that the threads occupy the memory values. The problems of low thread operation efficiency and poor test effect caused by frequent access to the call stack are avoided.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale. In the drawings:
FIG. 1 illustrates a flow chart of a memory monitoring method according to an embodiment of the present disclosure;
FIG. 2 shows a block diagram of a unit of a memory monitoring device according to an embodiment of the present disclosure;
fig. 3 illustrates a schematic diagram of an electronic device connection structure according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although certain embodiments of the present disclosure are shown in the accompanying drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments. Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "one" and "a plurality" in this disclosure are intended to be illustrative rather than limiting; those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Alternative embodiments of the present disclosure are described in detail below with reference to the drawings.
The first embodiment provided in the present disclosure is an embodiment of a memory monitoring method.
Embodiments of the present disclosure are described in detail below in conjunction with fig. 1. The embodiment of the disclosure uses hook technology to hijack the normal dynamic memory application and dynamic memory release.
Step S101, in the running program, obtaining unique identity information of the thread.
Typically, when a memory leak is suspected, monitoring code must be added manually to the thread assumed to be leaking. The present method does not need to assume a detection target or frequently access the call stack; instead, memory monitoring code is set in the program and the occupied memory values of all threads are counted at run time.
A thread, in operating-system terminology, is the smallest unit of execution that the operating system can schedule; it is contained in a process and is the actual unit of operation within the process. A thread is a single sequential flow of control in a process; multiple threads can run concurrently in a process, each performing a different task. Each time a thread is created, the system assigns it a piece of unique identity information.
Acquiring the unique identity information of the threads means acquiring the unique identity information of every thread in the process.
For example, Pthread is an implementation of the POSIX thread standard; all POSIX-standard threads in the system are created through Pthread interfaces, and Pthread internally maintains a doubly linked list that links all threads in the system. The relevant data structure is shown below:
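A minimal sketch of what such a structure could look like, loosely modeled on Bionic's internal per-thread bookkeeping; the field names here are illustrative assumptions, not an actual libc definition:

```c
#include <sys/types.h>   /* for pid_t */

/* Illustrative sketch only: field names are assumptions modeled on
 * Bionic's internal per-thread bookkeeping, not a real libc header. */
typedef struct pthread_internal_t {
    struct pthread_internal_t* next;   /* doubly linked list of all threads */
    struct pthread_internal_t* prev;
    pid_t tid;                         /* unique identity information of the thread */
    /* ... other per-thread fields (attributes, stack address, stack size, ...) */
} pthread_internal_t;
```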
wherein tid represents unique identity information of the thread;
thus, the Pthread data interface pointer of the current thread can be obtained through the pthread_self function, and then the linked list of the pthread_internal_t data structure is traversed to obtain the unique identity information tid of all threads.
Step S102, thread stack information of the corresponding thread is obtained based on the unique identity information.
The thread stack information includes at least a thread start address and a thread memory size.
For example, thread stack information pthread_attr_t is acquired through the int pthread_getattr_np(pthread_t t, pthread_attr_t *attr) interface, where t represents the unique identity information of the thread;
where the stack base address (stack_base) represents the thread start address and stack_size represents the thread memory size.
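A minimal sketch of reading these two values through the documented attribute accessors; pthread_getattr_np is a GNU/Bionic extension, and pthread_attr_getstack then returns the stack start address and stack size:

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>

/* Print the stack region (thread start address and thread memory size)
 * of the calling thread. */
static void print_stack_region(void) {
    pthread_attr_t attr;
    void*  stack_base = NULL;
    size_t stack_size = 0;

    if (pthread_getattr_np(pthread_self(), &attr) == 0) {
        pthread_attr_getstack(&attr, &stack_base, &stack_size);
        printf("thread start address: %p, thread memory size: %zu bytes\n",
               stack_base, stack_size);
        pthread_attr_destroy(&attr);
    }
}
```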
Step S103, a memory information set is acquired.
The memory information set is a way of storing and organizing data in a computer: it is a collection of data elements that have one or more specific relationships with one another, and it is usually designed together with efficient retrieval algorithms and indexing techniques so as to achieve higher operation or storage efficiency.
Each information unit in the memory information set comprises an application memory size dynamically acquired by a thread in the running program for calling a called function and a calling instruction address for calling the called function in a corresponding thread, wherein the value of the calling instruction address comprises a starting address of the called function. I.e. the information stored in each information unit in the memory information set is associated with the running program.
The application memory size refers to the memory size allocated to the called function by the system when the thread dynamically applies for memory for the called function. For example, in C language development, memory is often applied for dynamically with the malloc function, whose input parameter includes the applied memory size and whose return value includes the applied memory address.
The embodiment of the disclosure further comprises the following steps before the memory information set is acquired:
step S100, the memory information set is established.
Specifically, the method comprises the following steps:
step S100-1, hijacking dynamically applies for memory based on the called function, and obtaining the size of the applied memory and the call instruction address.
In a thread, the purpose of dynamically applying for memory is to apply for a temporary memory space for a called function (for example, in the C language, the malloc function is used to dynamically apply for a temporary memory space for a called function); the called function is placed in the temporary memory space so that the thread can call it to perform a specific function. When the called function has completed its work, the thread normally releases the temporary memory space dynamically (for example, in the C language, the temporary memory space is released with the free function), so that other called functions can dynamically apply for the memory space and the memory is recycled. However, due to design negligence, the temporary memory space may never be released; the called function then keeps occupying it and other called functions cannot reuse it. This phenomenon is referred to as a memory leak.
The embodiment of the disclosure uses hook technology to hijack the normal dynamic memory application so as to obtain the calling instruction address at the moment memory is dynamically applied for. For example, hook technology is used to hijack the malloc function that the system uses to dynamically apply for memory; inside the hooked malloc, the compiler built-in __builtin_frame_address function is called to obtain the frame pointer (FP) of the thread stack. The FP value is the calling instruction address and lies within the current thread's stack address range, so the thread that called the called function can be determined through the FP value.
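A hedged sketch of the recording side of such a hook. The actual hijacking mechanism (PLT hooking, a linker --wrap option, or similar) is not specified by the description; __real_malloc and record_allocation below are hypothetical names used only to illustrate the flow:

```c
#include <stdlib.h>
#include <stdint.h>

void* __real_malloc(size_t size);                  /* assumed: the original malloc (e.g. via -Wl,--wrap=malloc) */
void  record_allocation(void* addr, size_t size,
                        uintptr_t call_addr);      /* hypothetical: stores a new information unit */

/* Hooked malloc: forward the allocation, then record the applied memory
 * size together with the frame pointer (FP) of the caller. The FP lies
 * inside the calling thread's stack, which is what later allows the
 * allocation to be matched to its owning thread. */
void* __wrap_malloc(size_t size) {
    void* addr = __real_malloc(size);
    if (addr != NULL) {
        uintptr_t fp = (uintptr_t)__builtin_frame_address(0);
        record_allocation(addr, size, fp);
    }
    return addr;
}
```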
Step S100-2, a new information unit is created in the memory information set, and the applied memory size and the calling instruction address are saved in the new information unit.
Optionally, the memory information set comprises a binary tree. In computer science, a binary tree is a tree structure in which each node has at most two subtrees. The information units of the memory information set are represented by nodes of the binary tree. A node of the binary tree that represents an information unit further comprises an application memory starting address used as the binary tree index; that is, a node of the binary tree comprises the application memory starting address, the applied memory size and the calling instruction address. The application memory starting address is used to index the binary tree so as to improve retrieval efficiency: when memory is dynamically released, the temporary memory space pointed to by the application memory starting address can be found quickly by searching the binary tree with that address.
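A minimal sketch of such a node and of an insertion keyed by the application memory starting address; the names are illustrative assumptions:

```c
#include <stddef.h>
#include <stdint.h>

/* One information unit, stored as a binary-tree node and indexed by the
 * application memory starting address so that a later free() can find
 * the corresponding record quickly. */
typedef struct mem_info_node {
    void*     alloc_addr;    /* application memory starting address (index key) */
    size_t    alloc_size;    /* applied memory size */
    uintptr_t call_addr;     /* calling instruction address (caller's FP) */
    struct mem_info_node* left;
    struct mem_info_node* right;
} mem_info_node;

/* Insert a node into the tree, ordered by allocation address. */
static mem_info_node* tree_insert(mem_info_node* root, mem_info_node* node) {
    if (root == NULL) return node;
    if ((uintptr_t)node->alloc_addr < (uintptr_t)root->alloc_addr)
        root->left = tree_insert(root->left, node);
    else
        root->right = tree_insert(root->right, node);
    return root;
}
```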
Optionally, the acquiring the memory information set includes the following steps:
step S103-1, when the sum of the applied memory sizes of all the information units in the memory information set is larger than a preset memory occupation threshold value, the memory information set is obtained.
This step amplifies the memory occupation of the threads and effectively eliminates the interference that sporadic memory occupation by a thread would otherwise cause when locating a memory leak.
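A short sketch of that gating check, reusing the mem_info_node sketched above; sum_applied_sizes is a hypothetical helper that totals alloc_size over every node of the set:

```c
#include <stddef.h>

size_t sum_applied_sizes(const mem_info_node* root);   /* hypothetical: sums alloc_size over the tree */

/* Only report the memory information set once the total applied memory
 * exceeds a preset occupation threshold, so that sporadic, short-lived
 * allocations do not pollute the leak analysis. */
static int should_collect(const mem_info_node* root, size_t threshold_bytes) {
    return sum_applied_sizes(root) > threshold_bytes;
}
```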
Step S104, the call instruction address of each information unit in the memory information set is respectively matched with the thread stack information of each thread, and the matched call instruction address of the corresponding thread is determined.
Specifically, the method comprises the following steps:
step S104-1, obtaining the thread end address of the corresponding thread based on the thread start address and the thread memory size of each thread stack information.
Step S104-2, generating a thread address threshold of each thread based on the thread start address and the thread end address of the corresponding thread.
And step S104-3, matching the values of the call instruction addresses in the memory information set with the thread address threshold value of each thread respectively.
The value of the calling instruction address comprises the starting address of the called function.
Step S104-4, when the value of the calling instruction address is within the range of the thread address threshold value of the corresponding thread, determining that the calling instruction address is the matching calling instruction address of the corresponding thread.
The thread that calls the called function can be determined if the call instruction address of the called function is within the range of the thread address threshold.
Step S105, obtaining the matching application memory size of the corresponding thread from the information unit of the memory information set based on the matching call instruction address.
Step S106, based on each thread, calculating the sum of the size of the matched application memory and the size of the thread memory, and obtaining the occupied memory value of the corresponding thread.
That is, the total memory size occupied by each thread is calculated separately. Under normal circumstances, a thread dynamically applies for a temporary memory space for a called function in order to call it to perform a specific function; after the called function has finished, the thread dynamically releases the temporary memory space so that other called functions can apply for it again and the memory is recycled, and the corresponding information unit is deleted from the memory information set. When a thread calls no called function, its occupied memory value should equal its thread memory size; normally, after a thread has called functions and the memory has been released, its occupied memory value is still equal to its thread memory size. If a called function leaks memory, the memory is not released, so the memory occupied by the thread becomes larger than its thread memory size.
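Putting steps S104 to S106 together, a hedged sketch of the per-thread accounting. thread_info and the flat array of information units are simplifications of the structures sketched above, and the names are illustrative:

```c
#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>

typedef struct thread_info {
    pid_t     tid;            /* unique identity information */
    char      name[32];       /* thread name (see step S102-1) */
    uintptr_t stack_start;    /* thread start address */
    size_t    stack_size;     /* thread memory size */
    size_t    occupied;       /* computed occupied memory value */
} thread_info;

/* For every information unit, find the thread whose stack address range
 * [start, end) contains the calling instruction address, and add the
 * matching applied memory size to that thread's total; each thread's own
 * stack size is counted as its baseline. */
static void account_memory(thread_info* threads, size_t n_threads,
                           const mem_info_node* units, size_t n_units) {
    for (size_t i = 0; i < n_threads; i++)
        threads[i].occupied = threads[i].stack_size;

    for (size_t u = 0; u < n_units; u++) {
        for (size_t i = 0; i < n_threads; i++) {
            uintptr_t start = threads[i].stack_start;
            uintptr_t end   = start + threads[i].stack_size;   /* thread end address */
            if (units[u].call_addr >= start && units[u].call_addr < end) {
                threads[i].occupied += units[u].alloc_size;     /* matching applied memory size */
                break;
            }
        }
    }
}
```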
Therefore, in order to improve the testing efficiency of the developer, the embodiment of the disclosure further provides the following steps. Optionally, the obtaining the thread stack information of the corresponding thread based on the unique identity information further includes the following steps:
step S102-1, obtaining the thread name of the corresponding thread based on the unique identity information.
For example, the unique identity information of the thread is obtained through the pid_t pthread_gettid_np(pthread_t t) interface, and the thread name is then obtained by reading the "/proc/tid/comm" node.
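A minimal sketch of reading the name through that proc node, with the path format as described and error handling kept short:

```c
#include <stdio.h>
#include <string.h>

/* Read the thread name of the thread identified by tid from /proc. */
static void get_thread_name(int tid, char* name, size_t name_len) {
    char path[64];
    snprintf(path, sizeof(path), "/proc/%d/comm", tid);

    FILE* f = fopen(path, "r");
    if (f != NULL) {
        if (fgets(name, (int)name_len, f) != NULL) {
            name[strcspn(name, "\n")] = '\0';   /* strip the trailing newline */
        }
        fclose(f);
    }
}
```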
Further, after the memory occupied value of the corresponding thread is obtained, the method further comprises the following steps:
step S107, sorting is performed from large to small based on the occupied memory value, and an occupied memory sorting list is generated.
The occupied memory ordered list comprises a thread name and an occupied memory value of each thread.
Based on the occupied memory ordering list, at least one of the following processing modes is adopted:
the first processing mode is that the occupied memory ordering list is displayed;
and in a second processing mode, when the occupied memory value in the occupied memory sequencing list accords with a preset thread leakage threshold value, determining that the thread corresponding to the occupied memory value and having the thread name causes memory leakage.
The first processing mode enables experienced developers to quickly locate threads of memory leaks.
The second processing mode, by presetting a thread leakage threshold, automatically screens out and locates the leaking threads for the developer.
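A hedged sketch of generating the ordered list and applying the preset thread leakage threshold, reusing the thread_info sketch above; the qsort comparator sorts from large to small:

```c
#include <stdio.h>
#include <stdlib.h>

/* Compare two thread_info records by occupied memory, descending. */
static int cmp_occupied_desc(const void* a, const void* b) {
    const thread_info* ta = (const thread_info*)a;
    const thread_info* tb = (const thread_info*)b;
    if (ta->occupied < tb->occupied) return 1;
    if (ta->occupied > tb->occupied) return -1;
    return 0;
}

/* Print the occupied-memory ordered list and flag threads whose occupied
 * memory exceeds the preset thread leakage threshold. */
static void report_occupied_memory(thread_info* threads, size_t n_threads,
                                   size_t leak_threshold_bytes) {
    qsort(threads, n_threads, sizeof(thread_info), cmp_occupied_desc);
    for (size_t i = 0; i < n_threads; i++) {
        printf("%-16s occupied %zu bytes\n", threads[i].name, threads[i].occupied);
        if (threads[i].occupied > leak_threshold_bytes) {
            printf("  -> possible memory leak in thread %s\n", threads[i].name);
        }
    }
}
```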
According to the embodiment of the disclosure, the thread memory size is obtained through the unique identity information of the thread, the thread's calls to called functions are traced to obtain the applied memory that the called functions have not released, and the occupied memory value of each thread is counted. There is no need to assume a detection target or to frequently access the call stack: memory monitoring code is set in the program and the occupied memory values of all threads are counted at run time, so that a developer can quickly locate a thread with a memory leak from an abnormal occupied memory value. The problems of low thread running efficiency and poor test effect caused by frequently accessing the call stack are avoided.
Corresponding to the first embodiment provided by the present disclosure, the present disclosure also provides a second embodiment, i.e., a memory monitoring device. Since the second embodiment is substantially similar to the first embodiment, the description is relatively simple, and the relevant portions will be referred to the corresponding descriptions of the first embodiment. The device embodiments described below are merely illustrative.
Fig. 2 illustrates an embodiment of a memory monitoring device provided by the present disclosure.
Referring to fig. 2, the present disclosure provides a memory monitoring device, including:
the unique identity information obtaining unit 201 is configured to obtain unique identity information of a thread in a running program;
an acquiring thread stack information unit 202, configured to acquire thread stack information of a corresponding thread based on the unique identity information; the thread stack information at least comprises a thread starting address and a thread memory size;
an acquiring memory information set unit 203, configured to acquire a memory information set; each information unit in the memory information set comprises an application memory size dynamically acquired by a thread in the running program for calling a called function and a calling instruction address for calling the called function in a corresponding thread, wherein the value of the calling instruction address comprises a starting address of the called function;
a unit 204 for determining a matching call instruction address, configured to match the call instruction address of each information unit in the memory information set with thread stack information of each thread, and determine a matching call instruction address of a corresponding thread;
a match application memory size obtaining unit 205, configured to obtain a match application memory size of a corresponding thread from an information unit of the memory information set based on the match call instruction address, respectively;
and the memory value acquiring unit 206 is configured to calculate, based on each thread, a sum of the memory size of the matching application and the memory size of the thread, and acquire a memory value occupied by the corresponding thread.
Optionally, the determining a matching call instruction address unit 204 includes:
a thread end address obtaining subunit, configured to obtain a thread end address of a corresponding thread based on a thread start address and a thread memory size of each thread stack information;
a thread address generating sub-unit for generating a thread address threshold of a corresponding thread based on the thread start address and the thread end address of each thread;
the matching subunit is used for respectively matching the values of the call instruction addresses in the memory information set with the thread address threshold value of each thread;
and the determining and calling instruction address matching subunit is used for determining the calling instruction address as the matching and calling instruction address of the corresponding thread when the value of the calling instruction address is within the range of the thread address threshold value of the corresponding thread.
Optionally, the acquiring memory information set unit 203 includes:
and the memory information set acquisition subunit is used for acquiring the memory information set when the sum of the application memory sizes of all the information units in the memory information set is larger than a preset memory occupation threshold value.
Optionally, the apparatus further includes:
the memory information set establishing unit is used for establishing a memory information set before the memory information set is acquired;
the memory information set establishing unit includes:
an information unit information subunit for hijacking the dynamic application memory based on the called function, and acquiring the size of the application memory and the calling instruction address;
and creating a new information unit subunit, configured to create a new information unit in the memory information set, and store the application memory size and the call instruction address into the new information unit.
Optionally, the set of memory information includes a binary tree; the node in the binary tree representing the information unit further comprises an application memory starting address for the binary tree index.
Optionally, the acquiring thread stack information unit 202 further includes:
and the thread name acquisition subunit is used for acquiring the thread name of the corresponding thread based on the unique identity information.
Optionally, the apparatus further includes:
the occupied memory ordering list unit is used for ordering the occupied memory values from large to small based on the occupied memory values after the occupied memory values of the corresponding threads are acquired, and an occupied memory ordering list is generated; the occupied memory ordered list comprises a thread name and an occupied memory value of each thread.
Optionally, the apparatus further comprises at least one of the following units:
the display unit is used for displaying the occupied memory ordering list;
and determining a memory leakage thread unit, wherein the memory leakage thread unit is used for determining that a thread corresponding to the occupied memory value and having the thread name causes memory leakage when the occupied memory value in the occupied memory sequencing list accords with a preset thread leakage threshold value.
According to the embodiment of the disclosure, the thread memory size is obtained through the unique identity information of the thread, the thread's calls to called functions are traced to obtain the applied memory that the called functions have not released, and the occupied memory value of each thread is counted. There is no need to assume a detection target or to frequently access the call stack: memory monitoring code is set in the program and the occupied memory values of all threads are counted at run time, so that a developer can quickly locate a thread with a memory leak from an abnormal occupied memory value. The problems of low thread running efficiency and poor test effect caused by frequently accessing the call stack are avoided.
An embodiment of the present disclosure provides a third embodiment, namely an electronic device, where the electronic device is used in a memory monitoring method, and the electronic device includes: at least one processor; and a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the memory monitoring method of the first embodiment.
The present disclosure provides a fourth embodiment, namely a computer storage medium storing computer executable instructions that can perform the memory monitoring method described in the first embodiment.
Referring now to fig. 3, a schematic diagram of an electronic device suitable for use in implementing embodiments of the present disclosure is shown. The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 3 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 3, the electronic device may include a processing means (e.g., a central processor, a graphics processor, etc.) 301 that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 302 or a program loaded from a storage means 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data required for the operation of the electronic device are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
In general, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 308 including, for example, magnetic tape, hard disk, etc.; and communication means 309. The communication means 309 may allow the electronic device to communicate with other devices wirelessly or by wire to exchange data. While fig. 3 shows an electronic device having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via a communication device 309, or installed from a storage device 308, or installed from a ROM 302. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing means 301.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, object-oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. In some cases, the name of a unit does not constitute a limitation of the unit itself.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is merely a description of the preferred embodiments of the present disclosure and of the principles of the technology employed. Persons skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the specific combinations of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions in which the above features are replaced with (but not limited to) technical features having similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (9)

1. A memory monitoring method, comprising:
in the running program, acquiring unique identity information of a thread;
acquiring thread stack information of a corresponding thread based on the unique identity information; the thread stack information at least comprises a thread starting address and a thread memory size;
acquiring a memory information set; each information unit in the memory information set comprises an application memory size dynamically acquired by a thread in the running program for calling a called function and a calling instruction address for calling the called function in a corresponding thread, wherein the value of the calling instruction address comprises a starting address of the called function;
matching the call instruction address of each information unit in the memory information set with the thread stack information of each thread respectively, and determining the matched call instruction address of the corresponding thread;
acquiring the matching application memory size of the corresponding thread from the information unit of the memory information set based on the matching call instruction address respectively;
calculating the sum of the memory size of the matching application and the memory size of the threads based on each thread respectively, and obtaining the occupied memory value of the corresponding thread;
before the memory information set is acquired, the method further includes:
hijacking dynamically applies for memory based on the called function, and acquiring the size of the applied memory and the calling instruction address;
creating a new information unit in the memory information set, and storing the applied memory size and the call instruction address into the new information unit.
2. The memory monitoring method according to claim 1, wherein the matching the call instruction address of each information unit in the memory information set with the thread stack information of each thread, respectively, and determining the matched call instruction address of the corresponding thread includes:
acquiring a thread ending address of a corresponding thread based on the thread starting address and the thread memory size of each thread stack information;
generating a thread address threshold for each thread based on the thread start address and the thread end address for the corresponding thread;
matching the value of the call instruction address in the memory information set with the thread address threshold of each thread respectively;
and when the value of the calling instruction address is in the range of the thread address threshold value of the corresponding thread, determining that the calling instruction address is the matching calling instruction address of the corresponding thread.
3. The memory monitoring method according to claim 1, wherein the acquiring the set of memory information includes:
and when the sum of the applied memory sizes of all the information units in the memory information set is larger than a preset memory occupation threshold value, acquiring the memory information set.
4. The memory monitoring method of claim 1, wherein the set of memory information comprises a binary tree; the node in the binary tree representing the information unit further comprises an application memory starting address for the binary tree index.
5. The memory monitoring method according to claim 1, wherein the obtaining the thread stack information of the corresponding thread based on the unique identity information further comprises:
and acquiring the thread name of the corresponding thread based on the unique identity information.
6. The memory monitoring method according to claim 5, further comprising, after the acquiring the occupied memory value of the corresponding thread:
sorting based on the occupied memory value from big to small to generate an occupied memory sorting list; the occupied memory ordering list comprises a thread name and an occupied memory value of each thread;
based on the occupied memory ordering list, at least one of the following processing modes is obtained:
displaying the occupied memory sorting list;
and when the occupied memory value in the occupied memory sequencing list accords with a preset thread leakage threshold value, determining that the thread corresponding to the occupied memory value and having the thread name causes memory leakage.
7. A memory monitoring device, comprising:
the unique identity information obtaining unit is used for obtaining the unique identity information of the thread in the running program;
the thread stack information acquisition unit is used for acquiring thread stack information of a corresponding thread based on the unique identity information; the thread stack information at least comprises a thread starting address and a thread memory size;
the memory information acquisition unit is used for acquiring a memory information set; each information unit in the memory information set comprises an application memory size dynamically acquired by a thread in the running program for calling a called function and a calling instruction address for calling the called function in a corresponding thread, wherein the value of the calling instruction address comprises a starting address of the called function;
a determining matched calling instruction address unit, configured to respectively match the calling instruction address of each information unit in the memory information set with the thread stack information of each thread, and determine the matched calling instruction address of the corresponding thread;
the memory size obtaining unit is used for obtaining the memory size of the matching application of the corresponding thread from the information unit of the memory information set based on the matching call instruction address respectively;
the memory value acquiring unit is used for respectively calculating the sum of the memory size of the matching application and the memory size of the thread based on each thread and acquiring the memory value occupied by the corresponding thread;
wherein, before the memory information set is acquired, the apparatus further comprises:
an information unit information subunit for hijacking the dynamic application memory based on the called function, and acquiring the size of the application memory and the calling instruction address;
and creating a new information unit subunit, configured to create a new information unit in the memory information set, and store the application memory size and the call instruction address into the new information unit.
8. A computer readable storage medium having stored thereon a computer program, wherein the program when executed by a processor implements the memory monitoring method according to any of claims 1 to 6.
9. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which when executed by the one or more processors cause the one or more processors to implement the memory monitoring method of any of claims 1 to 6.
CN202010896417.6A 2020-08-31 2020-08-31 Memory monitoring method, device, medium and electronic equipment Active CN112084024B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010896417.6A CN112084024B (en) 2020-08-31 2020-08-31 Memory monitoring method, device, medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010896417.6A CN112084024B (en) 2020-08-31 2020-08-31 Memory monitoring method, device, medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN112084024A CN112084024A (en) 2020-12-15
CN112084024B true CN112084024B (en) 2024-02-02

Family

ID=73731269

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010896417.6A Active CN112084024B (en) 2020-08-31 2020-08-31 Memory monitoring method, device, medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN112084024B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114691347A (en) * 2020-12-29 2022-07-01 深圳Tcl新技术有限公司 Thread memory obtaining method and device, intelligent terminal and storage medium
CN112905381B (en) * 2021-03-22 2022-12-06 北京字节跳动网络技术有限公司 Method, device, equipment and medium for acquiring stack frame information in call stack

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102945163A (en) * 2012-10-29 2013-02-27 桂林电子科技大学 Signal-slot structure for embedded system
CN107562586A (en) * 2017-08-16 2018-01-09 努比亚技术有限公司 A kind of internal-memory detection method, equipment and computer-readable storage medium
WO2019001334A1 (en) * 2017-06-27 2019-01-03 阿里巴巴集团控股有限公司 Stack overflow processing method and device
CN110457150A (en) * 2019-07-10 2019-11-15 锐捷网络股份有限公司 A kind of memory failure detection method and device
CN110908865A (en) * 2019-11-15 2020-03-24 珠海豹趣科技有限公司 Memory leakage monitoring method and device and electronic equipment
CN111090536A (en) * 2019-11-19 2020-05-01 北京字节跳动网络技术有限公司 Method, device, medium and electronic equipment for acquiring memory leakage information

Also Published As

Publication number Publication date
CN112084024A (en) 2020-12-15

Similar Documents

Publication Publication Date Title
CN111090536B (en) Method, device, medium and electronic equipment for acquiring memory leakage information
CN113835992B (en) Memory leakage processing method and device, electronic equipment and computer storage medium
CN111813666B (en) Memory leakage positioning method, device, medium and electronic equipment
CN111679990B (en) Test data generation method and device, readable medium and electronic equipment
CN112084024B (en) Memory monitoring method, device, medium and electronic equipment
CN111625422B (en) Thread monitoring method, thread monitoring device, electronic equipment and computer readable storage medium
CN112099982A (en) Collapse information positioning method, device, medium and electronic equipment
CN114296986B (en) Memory leakage positioning method, device, medium and electronic equipment
CN111338831B (en) Processing method and device for multithreading operation performance, medium and electronic equipment
CN110888773B (en) Method, device, medium and electronic equipment for acquiring thread identification
CN112527302B (en) Error detection method and device, terminal and storage medium
CN112084086B (en) Information collection method, device, medium and electronic equipment
CN110764995B (en) Method, device, medium and electronic equipment for detecting file access abnormality
CN111984496B (en) Method, device, medium and electronic equipment for monitoring stack memory leakage
CN111538717B (en) Data processing method, device, electronic equipment and computer readable medium
CN112379967B (en) Simulator detection method, device, equipment and medium
CN113176937A (en) Task processing method and device and electronic equipment
CN111949528A (en) Memory leak detection method and device, electronic equipment and storage medium
CN112650647B (en) Information acquisition method, device, equipment and medium
CN111782410B (en) Lock jam monitoring method and device, electronic equipment and computer readable medium
CN110489341B (en) Test method and device, storage medium and electronic equipment
CN114398233B (en) Load abnormality detection method and device, server and storage medium
CN111209042B (en) Method, device, medium and electronic equipment for establishing function stack
CN113849394A (en) Memory leak detection method and device, electronic equipment and computer storage medium
CN116932375A (en) Method, device, equipment and medium for detecting null pointer problem

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant