CN108062252B - Information interaction method, object management method, device and system - Google Patents

Information interaction method, object management method, device and system

Info

Publication number
CN108062252B
CN108062252B (application number CN201610983813.6A)
Authority
CN
China
Prior art keywords
thread
shared memory
memory area
memory
creating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610983813.6A
Other languages
Chinese (zh)
Other versions
CN108062252A (en)
Inventor
叶敬福
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Banma Zhixing Network Hongkong Co Ltd
Original Assignee
Banma Zhixing Network Hongkong Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Banma Zhixing Network Hongkong Co Ltd filed Critical Banma Zhixing Network Hongkong Co Ltd
Priority to CN201610983813.6A priority Critical patent/CN108062252B/en
Publication of CN108062252A publication Critical patent/CN108062252A/en
Application granted granted Critical
Publication of CN108062252B publication Critical patent/CN108062252B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/52Program synchronisation; Mutual exclusion, e.g. by means of semaphores
    • G06F9/524Deadlock detection or avoidance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/544Buffers; Shared memory; Pipes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5018Thread allocation

Abstract

The application discloses an information interaction method, an object management method, a device, and a system. In the application, a first thread writes data into a shared memory area, where the shared memory area is different from the memory area managed by the memory management object in a dynamic language engine and can be shared and used by multiple threads, the memory management object being used to manage the memory areas used by threads; the first thread then sends the position information of the written data in the shared memory area to a second thread. In this way, different thread objects can implement information interaction based on the shared memory object, thereby reducing the complexity of cross-thread communication and improving communication efficiency.

Description

Information interaction method, object management method, device and system
Technical Field
The present application relates to the field of computer technologies, and in particular, to an information interaction method, an object management method, a device, and a system.
Background
The cloud operating system provides basic operating-system capabilities to user applications based on Node.js. Node.js is a Web application framework built on Chrome's JavaScript engine. Node.js has a self-contained runtime environment that can interpret and execute JavaScript code, and this runtime allows JavaScript code to be executed on any machine outside a browser. Node.js also provides various rich JavaScript module libraries, which simplify the development of Web applications with Node.js. In brief, Node.js provides a runtime environment and a JavaScript code library.
Node.js applications run on a single-threaded model by default. Although Node.js can improve response efficiency by using asynchronous calls and a non-blocking input/output (I/O) event model, the single-threaded model seriously limits run-time efficiency for computation-intensive applications.
Therefore, Node.js provides a multi-process library (the child_process module). Although this improves application parallelism to a certain extent, cross-thread communication is still needed among the multiple threads, its efficiency is low, and it cannot meet users' requirements for multi-threading.
Disclosure of Invention
The embodiments of the application provide an information interaction method, an object management method, a device, and a system, which are used to realize efficient multi-threaded communication for applications based on an object-oriented programming language.
The information interaction method provided by the embodiment of the application comprises the following steps:
writing data into the shared memory area by the first thread; the shared memory area is different from the memory area managed by the memory management object in the dynamic language engine and can be shared and used by a plurality of threads, and the memory management object is used for managing the memory area used by the threads;
and the first thread sends the position information of the written data in the shared memory area to a second thread.
Optionally, the sending, by the first thread, the location information of the written data in the shared memory area to the second thread includes: the first thread sends the position information of the written data in the shared memory area to a second thread by executing a message transmission method defined in the first thread object; wherein the first thread takes the following parameters as input parameters of the message passing method: and the identifier of the second thread and the position information of the data written by the first thread in the shared memory area.
Optionally, before the first thread writes data into the shared memory area, the method further includes:
the first thread acquires the write permission of a sub-area with a corresponding size in the shared memory area according to the size of data to be written;
after the first thread writes data into the shared memory area, the method further includes:
and the first thread releases the write permission of the sub-area.
The obtaining, by the first thread, the write permission to the sub-area of the corresponding size in the shared memory area according to the size of the data to be written includes: the first thread adds a write lock to a sub-area in the shared memory area, wherein the sub-area corresponds to the size of the data to be written, by executing a write lock method in the memory management object; the first thread takes the size of data to be written as an input parameter of the write lock method; the first thread removes write permission to the sub-region, including: the first thread releases the added write lock by executing an unlocking method in the memory management object; and the first thread takes a lock object returned by the write lock adding method as an input parameter of the unlocking method, and the lock object is used for indicating a memory area added with a write lock.
An information interaction method provided by another embodiment of the present application includes:
the second thread receives the position information of the data written by the first thread in the shared memory area, wherein the position information is sent by the first thread; the shared memory area is different from the memory area managed by the memory management object in the dynamic language engine and can be shared and used by a plurality of threads, and the memory management object is used for managing the memory area used by the threads;
and the second thread reads the data written by the first thread from the shared memory area according to the position information.
Optionally, the location information includes: the starting position of the data written by the first thread in the shared memory area and the size of the memory area occupied by the written data.
Optionally, the receiving, by the second thread, of the location information, sent by the first thread, of the data written by the first thread in the shared memory area includes:
and the second thread receives a message sent by the first thread, wherein the message carries the position information of the data written by the first thread in the shared memory area.
Optionally, before the second thread reads the data written by the first thread from the shared memory area according to the location information, the method further includes:
the second thread acquires the read permission of the corresponding sub-area in the shared memory area according to the position information;
after the second thread reads the data written by the first thread from the shared memory area, the method further includes:
the second thread relieves read permission of a plurality of the sub-regions.
Optionally, the obtaining, by the second thread according to the location information, a read permission of a sub-area of a corresponding size in the shared memory area includes:
the second thread adds a read lock to a sub-region with a corresponding size in the shared memory region by executing a read lock adding method in the memory management object; the second thread takes the position information as an input parameter of the reading locking method;
the second thread removing the read permission of the sub-area comprises:
the second thread releases the added read lock by executing an unlocking method in the memory management object; and the second thread takes the lock object returned by the read lock adding method as an input parameter of the unlocking method, where the lock object is used for indicating the memory area to which the read lock is added.
The object management method provided by the embodiment of the application comprises the following steps:
creating a shared memory object, wherein a memory area corresponding to the shared memory object is different from a memory area managed by a memory management object in a dynamic language engine, and the memory management object is used for managing the memory area used by a thread;
and creating a thread object, wherein the thread object is associated with the shared memory object, so that the thread corresponding to the thread object shares the memory area corresponding to the shared memory object.
Optionally, the thread object includes a memory attribute, and the shared memory object includes a shared memory object identifier attribute; associating the thread object with the shared memory object by: and setting the value of the memory attribute in the thread object to be the same as the value of the shared memory object identification attribute in the shared memory object.
Optionally, the shared memory object includes the following attributes: the shared memory object identifier is used for uniquely identifying the shared memory object;
the shared memory object comprises the following methods: the data writing method is used for writing data into a memory area corresponding to the shared memory object; and the data reading method is used for reading data from the memory area corresponding to the shared memory object.
Further, the shared memory object further includes the following method:
the write lock method is used for adding a write lock to a memory area corresponding to the shared memory object or a designated sub-area in the memory area so as to block threads which do not acquire write permission from writing data into the memory area to be added with the write lock or reading data from the memory area to be added with the write lock;
and the unlocking method is used for releasing the added write lock.
The input parameters of the write lock method comprise: the starting position of the designated sub-region in the memory region corresponding to the shared memory object, and the size of the designated sub-region.
Optionally, the shared memory object further includes the following method:
a read lock adding method, configured to add a read lock to a memory region corresponding to the shared memory object or to a designated sub-region in the memory region, so as to block other threads except for invoking the read lock adding method from writing data in the memory region to which the read lock is added;
and the unlocking method is used for releasing the added read lock.
The input parameters of the read lock adding method comprise: the starting position of the designated sub-region in the memory region corresponding to the shared memory object, and the size of the designated sub-region.
Optionally, the following attributes are included in the thread object:
the thread object identifier is used for uniquely identifying the thread object;
the thread object comprises the following methods:
a message passing method for passing messages between thread objects.
Optionally, the method further comprises: loading a code base, wherein the code base comprises: the method comprises the steps that codes used for creating a shared memory object and a corresponding first interface, and codes used for creating a thread object and a corresponding second interface;
creating a shared memory object, comprising: calling the first interface, and creating to obtain a shared memory object;
creating a thread object, comprising: and calling the second interface to create and obtain a thread object.
Optionally, when the first interface is called, the size of an initially allocated memory region and the size of a reserved memory region are used as input parameters, so that the size of a memory region corresponding to the created shared memory object is the same as the size of the initially allocated memory region, and the size of a memory region reserved for the shared memory object is the same as the size of the reserved memory region; or, when the first interface is called, the size of the initially allocated memory area is used as an input parameter, so that the size of the memory area corresponding to the created shared memory object is the same as the size of the initially allocated memory area; or, when the first interface is called, the size of the reserved memory area is used as an input parameter, so that the size of the memory area corresponding to the created shared memory object is the same as the default set memory area size, and the size of the memory area reserved for the shared memory object is the same as the reserved memory area size.
Optionally, the number of the created shared memory objects is one or more; and if the number of the created shared memory objects is multiple, grouping the created thread objects, wherein the thread objects in each group are associated with one shared memory object, and the shared memory objects associated with the thread objects in different groups are different from each other.
The information interaction device provided by the embodiment of the application is a device corresponding to a first thread object, and the device comprises:
the write operation module is used for writing data into the shared memory area; the shared memory area is different from the memory area managed by the memory management object in the dynamic language engine and can be shared and used by a plurality of threads, and the memory management object is used for managing the memory area used by the threads;
and the interaction module is used for sending the position information of the written data in the shared memory area to the second thread.
An information interaction apparatus provided in another embodiment of the present application is an apparatus corresponding to a second thread object, and the apparatus includes:
the interaction module is used for receiving the position information of the data written by the first thread in the shared memory area, wherein the position information is sent by the first thread; the shared memory area is different from the memory area managed by the memory management object in the dynamic language engine and can be shared and used by a plurality of threads, and the memory management object is used for managing the memory area used by the threads;
and the read operation module is used for reading the data written by the first thread from the shared memory area according to the position information.
The object management device provided by the embodiment of the application comprises:
the shared memory object management module is used for creating a shared memory object, a memory area corresponding to the shared memory object is different from a memory area managed by a memory management object in the dynamic language engine, and the memory management object is used for managing the memory area used by a thread;
and the thread object management module is used for creating a thread object, and the thread object is associated with the shared memory object so that the thread corresponding to the thread object shares the memory area corresponding to the shared memory object.
The cloud operating system provided by the embodiment of the application comprises: an application layer, a runtime framework layer, and a system kernel;
the runtime framework layer comprises a dynamic language engine and a code base, wherein the code base comprises: code for creating a shared memory object, and code for creating a thread object;
the shared memory object created by the code for creating the shared memory object has a corresponding memory area different from a memory area managed by a memory management object in a dynamic language engine, wherein the memory management object is used for managing the memory area used by a thread;
and associating the thread object created by the code for creating the thread object with the shared memory object so that the thread corresponding to the thread object shares the memory area corresponding to the shared memory object.
The computer device provided by the embodiment of the application comprises: a processor, a memory;
a memory for storing computer program instructions;
a processor, coupled to the memory, for reading computer program instructions stored by the memory and, in response, performing the following:
writing data into the shared memory area by the first thread; the shared memory area is different from the memory area managed by the memory management object in the dynamic language engine and can be shared and used by a plurality of threads, and the memory management object is used for managing the memory area used by the threads;
and the first thread sends the position information of the written data in the shared memory area to a second thread.
In the above embodiments of the present application, the memory area corresponding to the shared memory object is different from the memory area managed by the memory management object in the dynamic language engine and can be shared by multiple threads, that is, multiple threads can implement information interaction based on the memory area corresponding to the shared memory object. Compared with creating multiple threads based on the memory management object in the dynamic language engine, which results in high communication complexity and low efficiency between threads, the above embodiments create a memory area shared by multiple thread objects outside the memory area managed by the memory management object in the dynamic language engine, so that different threads implement information interaction based on the shared memory object, thereby reducing the complexity of cross-thread communication and improving communication efficiency.
Drawings
Fig. 1 is a schematic diagram illustrating the relationship among a Multi-thread code module, a thread object, and a shared memory object according to an embodiment of the present application;
Fig. 2 is a schematic diagram illustrating a process of creating a shared memory object and a thread object according to an embodiment of the present application;
fig. 3 is a schematic view illustrating an inter-thread information interaction flow according to an embodiment of the present disclosure;
FIG. 4 is a second illustrative flowchart of information interaction between threads according to an embodiment of the present application;
fig. 5 is a schematic flow chart of data writing of a downloadThread object according to an embodiment of the present application;
FIG. 6 is a schematic flow chart of reading data of a playerThread object according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an object management apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an information interaction device according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of an information interaction apparatus according to another embodiment of the present application;
FIG. 10 is a schematic structural diagram of a computer device provided in an embodiment of the present application;
fig. 11 is a schematic diagram of an operating system architecture according to an embodiment of the present application.
Detailed Description
The cloud operating system provides basic operating-system capabilities to user applications based on Node.js; as described in the background, however, the existing single-threaded model and multi-process approach cannot efficiently support computation-intensive applications.
Therefore, the embodiment of the application provides a multithreading communication model and a memory sharing technology among multiple threads, and the application based on the multithreading communication and the memory sharing technology can improve the operation efficiency and the user response speed and can be suitable for calculation intensive application.
The embodiments of the present application can be applied to an application program written based on an object-oriented programming language, for example, the object-oriented programming language may include JavaScript, but other types of programming languages may also be included, which is not listed here.
The application is implemented in an application framework but outside a dynamic language engine (such as a JavaScript engine), and provides multi-threaded communication and memory sharing techniques; specifically, it is implemented in the Node.js framework but outside the JavaScript engine. In brief, in the embodiment of the present application, native system threads are created in Node.js, and a memory area that can be shared by multiple threads is created outside the isolate object of the JavaScript engine, so that the memory management mechanism of the JavaScript engine is bypassed and a more flexible and efficient memory management capability is provided. A dynamic language is a category of computer programming language whose types and structures can change dynamically at runtime; functions and attributes can be added, modified, and deleted at runtime. For example, JavaScript, Python, and Ruby are dynamic languages. A dynamic language can run without compilation but needs the support of a runtime environment, which includes all elements required for running the dynamic language, such as a Java virtual machine or a JavaScript engine. The isolate object is an object used for memory management in the JavaScript engine; for example, an isolate may represent an independent instance of the JavaScript engine, may manage the memory state of a thread, and is responsible for creating and recycling each JavaScript object during the running of the thread.
The application programs referred to in the embodiments of the present application may be various types of application programs, and may be service components, for example. Taking YunOS (a cloud operating system) as an example, the application program may be Page in YunOS. Page is an abstraction of local and remote services, i.e. the basic unit of a service, which can provide various services by encapsulating data and methods. A service scenario may include multiple pages. For example, a Page may be a User Interface (UI), a photo, or a background service, such as account authentication. The Page in the running state is called a Page instance and is a running carrier of a local service or a remote service. Each Page can be uniquely identified in YunOS.
The embodiment of the application can be applied to a client side and a server side. Taking the application to the client as an example, the method can be applied to a mobile terminal or a Personal Computer (PC) and other devices, where the mobile terminal can be a mobile phone, a Personal Digital Assistant (PDA), a vehicle-mounted terminal or an intelligent wearable device.
In order to more clearly understand the embodiments of the present application, some technical terms related to the object-oriented programming technology related to the embodiments of the present application are first briefly described.
-an object: properties (property) and methods (method) may be defined in an object, and the properties and methods are encapsulated in the object. In memory, these objects are, in essence, memory blocks in which data and executable methods are stored.
-a thread object: an instance object of a thread class. A thread object encapsulates some information about the thread (for example, the methods that the thread executes), and the methods in one thread object may be run by other threads. Taking JavaScript as an example, a thread object may be obtained by inheriting a Thread class or by implementing a runnable interface, for example by deriving a new class from Thread, adding properties and methods to it, and overriding the run() method; an object of the derived class is then a new thread object. The run() method includes the code to be executed by the thread.
-a thread: a thread is the execution path of the methods defined in an object, i.e., a thread is a one-time execution of code, in which the thread can execute the methods defined by its thread object as well as methods in other objects. Taking JavaScript as an example, after a thread object has been created by inheriting the Thread class, its start() method is executed, thereby starting a thread.
In the embodiment of the present application, two objects are defined:
(1) shared memory objects, one shared memory object may correspond to one memory region.
The shared memory object may include the following attributes:
-a shared memory object identifier for uniquely identifying the shared memory object.
The shared memory object may further include the following method:
-a data writing method for writing data to a memory region corresponding to the shared memory object;
-a read data method for reading data from a memory region corresponding to the shared memory object.
Further, the shared memory object may further include the following method:
the write lock method is used for applying a write lock to a memory area corresponding to the shared memory object or a designated sub-area in the memory area so as to block a thread which does not apply for the write permission from writing data into the memory area to which the write lock is applied or reading data from the memory area to which the write lock is applied. Optionally, the input parameters of the write lock method may include: the starting position of the designated sub-region in the memory region corresponding to the shared memory object, and the size of the designated sub-region.
The read lock adding method is used for adding a read lock to a memory area corresponding to the shared memory object or a designated sub-area in the memory area so as to block any thread from writing data into the memory area with the read lock. Optionally, the input parameters of the read lock adding method may include: the starting position of the designated sub-region in the memory region corresponding to the shared memory object, and the size of the designated sub-region.
-unlocking means for releasing the applied write lock or read lock.
(2) Thread object
The following attributes may be included in the thread object:
-a thread object identification for uniquely identifying the thread object.
The following methods can also be included in the thread object:
-a messaging method for transferring messages between thread objects. Optionally, the message passing method may include the following input parameters: an identification of the target thread, and the content of the message passed to the target thread.
Further, the shared memory object and the thread object may also include other attributes and methods, and only some of the attributes and methods related to the embodiments of the present application are listed here.
The shared memory object and the thread object may be created by calling corresponding interfaces, for example, the interface for creating the shared memory object may be called to create the shared memory object, and the interface for creating the thread object may be called to create the thread object. After the shared memory object and the thread object are created, the thread object may be started, thereby starting the execution process of the thread.
Multiple thread objects can be associated with the same shared memory object, so that the shared memory can be shared by multiple threads. Specifically, a thread object may include a memory object identifier attribute, and a value of the attribute is set as a value of a shared memory object identifier in a shared memory object, so as to associate the thread object with the shared memory object, so that a thread started based on the thread object may access a memory area corresponding to the shared memory object on one hand, and may also execute a method included in the shared memory object on the other hand.
The embodiments of the present application are described in detail below with reference to the drawings by taking JavaScript as an example.
In Node.js, a code library is added that includes JavaScript code implementing the object management and multi-thread communication functions provided by the embodiments of the present application. This new JavaScript code library may be encapsulated as a code module (called an add-on module). A code module (add-on module) is one of the components of Node.js; it provides various functions (or methods) and exposes Application Programming Interfaces (APIs) for those functions, so that the corresponding functions can be executed by calling the APIs. These functions may be implemented in JavaScript code or C++ code.
For convenience of description, the JavaScript code library implementing the object management and multi-thread communication functions provided in the embodiments of the present application is encapsulated as a Multi-thread code module (Multi-thread.addon) that is compatible with the Node.js framework. Optionally, the Multi-thread code module may be loaded at Node.js initialization, or may be loaded on demand while the application program is running.
Multi-thread.addon provides interfaces in the form of APIs to implement the management functions for thread objects and shared memory objects, as well as the inter-thread communication functions based on shared memory objects.
For example, a Multi-threaded code module (Multi-thread. addon) may provide the following interfaces:
an interface for creating thread objects, which for the sake of description will be referred to below as thread object creation API;
an interface for creating shared memory objects, which for the sake of description will be referred to hereinafter as a shared memory object creation API.
By calling the thread object creation API, a thread object can be created based on code provided by a Multi-threaded code module (addon) that implements the thread object creation function.
Further, when calling the thread object creation API, the following input parameters may be provided: initial parameters passed to the thread object. Accordingly, when creating a thread object, the created thread object may be set according to the initial parameters.
The following function statement exemplarily shows the function definition for creating a thread object:
function createThread(job,args)
where createThread is the name of the function; executing the function creates a thread object. job and args are input parameters. job is a JavaScript function that forms the main body of the newly created thread, and args holds the initial parameters passed from the main thread object to the child thread object, where the main thread object is the thread object executing the call and the child thread object is the thread object it creates. args can be the initial value of a variable or the body of a message, and can be defined in JSON format.
The attributes and methods contained in the thread object mainly comprise the following (a usage sketch is given after the list):
-token: a property of the thread object; the property value is an integer that uniquely identifies the thread object, and communication between different thread objects can use this token to find a specified thread object.
-sendMessage (token, message): a method in a thread object for enabling one thread to send a message to another thread. The message body may be text content defined in the JSON format. The method is in an asynchronous calling mode, and the caller thread of the method returns immediately after sending the message, so that the caller thread is not blocked.
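As a non-limiting illustration, the following sketch (in Node.js-style JavaScript) shows how an application might create a thread object and send a message using the members listed above. The module name 'multi-thread' used with require(), and the way args reaches the job function, are assumptions made only for this sketch.

// Sketch only: 'multi-thread' is an assumed module name for Multi-thread.addon.
const mt = require('multi-thread');
// job is the body of the new thread; args carries the initial parameters from the main thread object.
const worker = mt.createThread(function job(args) {
  console.log('worker started with', args); // assumed to run once the thread is started
}, { greeting: 'hello' });
// token uniquely identifies the thread object; sendMessage() is asynchronous and returns
// immediately after the message is sent, without blocking the calling thread.
console.log('worker token:', worker.token);
worker.sendMessage(worker.token, JSON.stringify({ type: 'ping' }));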
By calling the shared memory object creation API, a shared memory object can be created based on code provided by Multi-thread.addon. The memory area corresponding to the shared memory object is different from the memory area managed by the isolate object in the JavaScript engine, and can be shared and used by a plurality of threads.
When the shared memory object creation API is called, the following input parameters may be provided: the size of the memory area is initially allocated, and/or the size of the memory area is reserved. Accordingly, when creating the shared memory object, the initial size of the memory region corresponding to the shared memory object may be set according to the size of the initially allocated memory region, or the memory region of the corresponding size may be reserved for the shared memory object according to the size of the reserved memory region.
By setting the initial size of the shared memory area and reserving the shared memory area, the memory area can be allocated as required. In specific implementation, when creating the shared memory object, a relatively small memory area may be allocated to the shared memory object (allocated according to a parameter "initially allocated memory area size"), and then, if more memory resources are needed in the running process of the application program, more memory areas may be applied to be allocated on the basis of the memory area initially allocated to the shared memory object until the size of the allocated memory area reaches the limit of the "reserved memory area size". Thus, the use efficiency of the memory resources can be improved.
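As a non-limiting illustration, the following sketch shows a shared memory object being created with both input parameters. The function name createSharedBuffer() and the parameter order (initially allocated size, then reserved size) are assumptions consistent with this description.

// Sketch only: module and function names are assumptions.
const mt = require('multi-thread');
// Initially allocate a small region (64 KB) while reserving up to 4 MB, so that more
// memory can be requested on demand later, up to the reserved limit.
const buffer = mt.createSharedBuffer(64 * 1024, 4 * 1024 * 1024);
console.log('shared memory object token:', buffer.token);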
The attributes and methods included in the shared memory object mainly comprise the following (a usage sketch is given after the list):
-token: an attribute of the shared memory object; the attribute value is an integer that uniquely identifies the shared memory object. A thread object may use the token of a shared memory object to query the attributes (such as the read-write status or locking status) of the corresponding shared memory object.
-lockRD (start, size): a method for adding a read lock in a shared memory object. The method can realize the addition of the read lock to the memory area corresponding to the shared memory object or the appointed sub-area in the memory area. start and size are input parameters, respectively, where start represents the start position of the locked sub-region, and size represents the size of the locked region. The lockRD () method may add a read lock to the shared memory region indicating that the memory region is read-only and that any attempted write to the memory region will fail. The lockRD () method may return a lock object, which is a local object.
-lockRW (start, size): a write lock method in a shared memory object. The method can realize the write lock on the memory area corresponding to the shared memory object or the appointed sub-area in the memory area. start and size are input parameters, respectively. Where start denotes the start position of the locked sub-area and size denotes the size of the locked area. The lockRW () method may place a write lock on a shared memory region to indicate that the memory region is read or written, such that only one thread is allowed to read or write to it at any time, and other threads attempting to read or write will be blocked. The lockRW () method may return a lock object, which is a local object.
Unlock (lock): an unlocking method in a shared memory object. The method can unlock the shared memory area. After the data read-write operation is completed on the shared memory region, the unlock () method can be called to unlock the corresponding memory region so as to remove the access restriction on other threads. Wherein, lock is an input parameter of the unlock () method, and is a lock object returned by the lock RD () method or the lock RW () method.
-read (lock, data): a data reading method in the shared memory object. The method reads the data stored in the shared memory area. lock is an input parameter and is a lock object returned by the lockRD() method. data is an output variable; the read data can be assigned to this variable to be returned.
-write (lock, data): a data writing method in the shared memory object. The method writes data into the shared memory area. lock and data are input parameters. lock is a lock object returned by the lockRW() method, and data is the data to be written into the shared memory area.
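As a non-limiting illustration, the following sketch exercises the members listed above from a single thread. The module and creation-call names are assumptions, and read() is assumed here to return the bytes read rather than fill an output parameter.

// Sketch only: single-threaded use of the SharedBuffer members described above.
const mt = require('multi-thread');                 // assumed module name
const buffer = mt.createSharedBuffer(1024, 4096);   // assumed creation call
const payload = Buffer.from('hello shared memory');
// Write: lockRW() on bytes [0, payload.length), write(), then unlock().
let lock = buffer.lockRW(0, payload.length);
buffer.write(lock, payload);
buffer.unlock(lock);
// Read: lockRD() on the same sub-region, read(), then unlock().
lock = buffer.lockRD(0, payload.length);
const data = buffer.read(lock);                     // assumed to return the bytes read
buffer.unlock(lock);
console.log(String(data));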
Fig. 1 exemplarily shows the relationship among the Multi-thread code module (Multi-thread.addon), a thread object, and a shared memory object: calling the createThread() method in the Multi-thread code module (Multi-thread.addon) creates a thread object (Thread), which may contain the token attribute and the sendMessage() method; calling the createSharedBuffer() method in the Multi-thread code module (Multi-thread.addon) creates a shared memory object (SharedBuffer), which may contain the token attribute as well as the lockRD(), lockRW(), unlock(), read(), and write() methods. Of course, the attributes and methods in the thread object and the shared memory object are only listed as examples; in practical applications, other attributes and methods may be included as needed, which are not listed here.
The following describes a process for creating a shared memory object and a thread object for a music playing application, taking the application as an example.
Music playing is usually designed as an online playing mode: music data is stored on the server side, and after a user selects a song on the client side, the song is downloaded and played simultaneously, with the download operation and the play operation carried out in parallel to provide an efficient playing experience. Thus, in implementation, the application program creates at least two threads responsible for downloading and playing music, respectively.
Referring to fig. 2, a process for creating a shared memory object and a thread object according to the embodiment of the present application is shown, where the process may include the following steps:
Step 201: loading the Multi-thread code module (Multi-thread.addon).
After the music playing application is started, the main thread of the application loads the Multi-thread code module (Multi-thread.addon). This step is optional; for example, the Multi-thread code module (Multi-thread.addon) may be loaded at Node.js initialization, so that it does not need to be loaded again after the music playing application is started.
Step 202: creating a shared memory object for the application based on the Multi-thread.addon code module.
In this step, the main thread of the music playing application program creates a shared memory object by calling a shared memory object creation method in a Multi-thread code module (addon). The creation process of the shared memory object can be referred to the description of the foregoing embodiment.
In this step, one shared memory object may be created, or a plurality of shared memory objects may be created.
Step 203: creating a download thread object for the application based on the Multi-thread.addon code module.
In this step, the main thread of the music playing application creates a download thread object by calling the thread object creation method in the Multi-thread code module (Multi-thread.addon). The creation process of the thread object can be referred to the description of the foregoing embodiment.
Step 204: creating a play thread object for the application based on the Multi-thread.addon code module.
In this step, the main thread of the music playing application creates a playing thread object by calling a thread object creation method in a Multi-thread code module (addon). The creation process of the thread object can be referred to the description of the foregoing embodiment.
After the download thread object is created, the start() method is executed, that is, the download thread is started and can execute the methods contained in the download thread object; after the play thread object is created, the start() method is executed, that is, the play thread is started and can execute the methods contained in the play thread object.
It should be noted that, the execution sequence of each step in the above flow is only an example, and the execution sequence of each step in the embodiment of the present application is not limited, for example, the sequence of step 203 and step 204 may be interchanged.
It should be further noted that the above flow is described only by taking a music playing application as an example, the application needs to create 2 thread objects for music downloading and music playing, and for other applications, a greater number of thread objects can be created as needed.
If the number of the shared memory objects is multiple, the thread objects can be grouped, the thread objects in different groups are associated with different shared memory objects, and all the thread objects in one group are associated with the same shared memory object. In this way, the threads corresponding to the thread objects in each group can share and use the same memory area, and data access isolation between the groups can be realized.
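As a non-limiting illustration, the following sketch groups thread objects so that each group shares its own memory region. The module and function names, the empty job bodies, and the bufferToken association attribute are assumptions consistent with this description.

// Sketch only: two groups of thread objects, each associated with its own shared memory object.
const mt = require('multi-thread');
function jobA() { /* work that uses group A's shared region */ }
function jobB() { /* work that uses group B's shared region */ }
const bufferA = mt.createSharedBuffer(1024, 4096);  // region shared by group A
const bufferB = mt.createSharedBuffer(1024, 4096);  // region shared by group B
const groupA = [mt.createThread(jobA, null), mt.createThread(jobA, null)];
const groupB = [mt.createThread(jobB, null)];
// Associate every thread object in a group with that group's shared memory object,
// so that threads in different groups cannot access each other's memory region.
groupA.forEach(function (t) { t.bufferToken = bufferA.token; });
groupB.forEach(function (t) { t.bufferToken = bufferB.token; });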
Further, the Multi-thread code module (Multi-thread.addon) may also provide other object management functions, such as object query functions.
For example, one or a combination of the following attributes of the corresponding shared memory object may be queried according to the identifier of the shared memory object:
-the size of the memory region corresponding to the shared memory object.
The read/write status of a given sub-region in the memory region corresponding to the shared memory object, for example, whether the sub-region is currently locked by read or write.
For another example, the running status of the corresponding thread may also be queried according to the identifier of the thread object, such as querying whether the thread is currently in a running status (run), a stop status (stop), or a pause status (pause).
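Purely as a non-limiting illustration of such query functions, the sketch below uses hypothetical helper names: queryBufferStatus() and queryThreadStatus() are not defined by this application and are invented here only to show queries keyed by object identifier.

// Sketch only: queryBufferStatus() and queryThreadStatus() are hypothetical names.
const mt = require('multi-thread');                 // assumed module name
const buffer = mt.createSharedBuffer(1024, 4096);   // assumed creation call
const worker = mt.createThread(function () {}, null);
const bufferInfo = mt.queryBufferStatus(buffer.token);   // hypothetical call
console.log(bufferInfo.size);                       // size of the memory region
console.log(bufferInfo.lockState);                  // read/write or locking status of a sub-region
const threadState = mt.queryThreadStatus(worker.token);  // hypothetical call
console.log(threadState);                           // e.g. 'run', 'stop' or 'pause'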
Furthermore, the object management functions provided by the Multi-thread.addon module are not limited to the examples above.
The following describes a process of performing information interaction between threads based on a shared memory with reference to the accompanying drawings.
For convenience of description, the following takes an example in which a first thread object and a second thread object perform information interaction based on a shared memory area. The first thread object, the second thread object, and the shared memory object may be created by the methods described in the foregoing embodiments, or by other methods. After the first thread object is created, its start() method is executed, that is, the first thread is started and can execute the methods contained in the first thread object; after the second thread object is created, its start() method is executed, that is, the second thread is started and can execute the methods contained in the second thread object. The first thread and the second thread may be two threads in the same application program, such as the download thread and the play thread in the music playing application program, or may belong to different application programs.
Referring to fig. 3, a flow for a first thread to write data into a shared memory area according to an embodiment of the present application may include the following steps:
step 310: the first thread writes data to the shared memory region.
Step 320: and the first thread sends the position information of the written data in the shared memory area to the second thread.
The data location information sent by the first thread to the second thread may include: the starting position of the data written by the first thread in the shared memory area and the size of the memory area occupied by the written data are used for enabling the second thread to read the data according to the position information.
According to the foregoing description, the thread object may include a message passing method for passing messages between threads, so in step 320, the first thread may call the message passing method included in the first thread object to send the location information of the written data in the shared memory area to the second thread. The input parameters in the message passing method may include: the identification of the second thread and the position information of the data written by the first thread in the shared memory area.
Further, in order to avoid a conflict caused by the simultaneous read-write operation of multiple threads on the shared memory area and ensure the security of data, the embodiment of the present application adopts a protection mechanism for locking the shared memory area.
Specifically, the following steps 305 may be performed before step 310: the first thread adds a write lock to a sub-area with a corresponding size in the shared memory area according to the size of the data to be written, so that the thread which does not apply for the write permission can be blocked from writing data into the memory area added with the write lock or reading data from the memory area added with the write lock; after step 310, step 315 may be performed: the first thread releases the applied write lock to release the read-write limitation of other threads to the shared memory area. Of course, step 315 may be performed before step 320, after step 320, or simultaneously with step 320, which is not limited in this embodiment of the application.
Further, before locking the sub-region, the first thread may query the read-write state of the sub-region, and may apply for the write permission of the sub-region only when it is determined that the sub-region is not locked, so as to lock the sub-region, otherwise, may wait for the next write permission application.
Specifically, before step 305, step 302 may also be included: the first thread queries the read-write state of the shared memory area, and more specifically, queries the read-write state of a sub-area with a corresponding size in the shared memory area according to the size of data to be written; accordingly, in step 305, if the first thread object queries that the sub-region is not locked, then the write lock is applied to the sub-region.
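As a non-limiting illustration, the following sketch (in Node.js-style JavaScript) puts the writer-side steps together. The helper name writeAndNotify and the JSON message layout are assumptions; lockRW(), write(), unlock(), and sendMessage() follow the members described above.

// Sketch only: writer-side flow of fig. 3 (lock, write, unlock, notify).
function writeAndNotify(buffer, selfThread, targetToken, start, data) {
  // Step 302 (optional): the writer may first query whether the target sub-region is
  // already locked; that query is omitted from this sketch.
  // Step 305: add a write lock to a sub-region sized to the data to be written.
  const lock = buffer.lockRW(start, data.length);
  // Step 310: write the data into the locked sub-region.
  buffer.write(lock, data);
  // Step 315: release the write lock so that other threads are no longer blocked.
  buffer.unlock(lock);
  // Step 320: send the position information (start position and size) to the second thread.
  selfThread.sendMessage(targetToken, JSON.stringify({ start: start, size: data.length }));
}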
Referring to fig. 4, a flow for a second thread to read data from a shared memory area according to an embodiment of the present application may include the following steps:
step 410: and the second thread receives the position information of the data written by the first thread in the shared memory area, which is sent by the first thread.
The specific content of the location information and the transmission mode of the location information are the same as those described above.
Step 420: and the second thread reads the data written by the first thread from the shared memory area according to the position information.
Further, in order to avoid a conflict caused by the simultaneous read-write operation of multiple threads on the shared memory area and ensure the security of data, the embodiment of the present application adopts a protection mechanism for locking the shared memory area.
Specifically, the following step 415 may be performed before step 420: the second thread adds a read lock to a corresponding sub-area in the shared memory area according to the received data position information, so that any thread can be blocked from writing data into the memory area added with the read lock, namely, the memory area is in a read-only state, and any thread can read the data of the memory area but cannot write the data into the memory area; after step 420, step 425 may be performed: and the second thread removes the added read lock so as to remove the read-write limitation of other threads on the shared memory area.
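Correspondingly, a non-limiting sketch of the reader-side steps is given below. How the second thread receives the message is not fixed by this description; an onMessage-style callback and a read() return value are assumed for illustration, while lockRD(), read(), and unlock() follow the members described above.

// Sketch only: reader-side flow of fig. 4 (receive position, read lock, read, unlock).
function onMessage(buffer, messageText) {
  // Step 410: receive the position information sent by the first thread.
  const pos = JSON.parse(messageText);      // assumed layout: { start, size }
  // Step 415: add a read lock to the corresponding sub-region.
  const lock = buffer.lockRD(pos.start, pos.size);
  // Step 420: read the data written by the first thread.
  const data = buffer.read(lock);           // assumed to return the bytes read
  // Step 425: release the read lock to remove the write restriction on other threads.
  buffer.unlock(lock);
  return data;
}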
As can be seen from the above description, in the above embodiments of the present application, the memory area corresponding to the shared memory object is located outside the memory area managed by the isolate object in the JavaScript engine and can be shared and used by multiple threads, that is, multiple threads can implement information interaction based on the shared memory. Compared with creating multiple threads based on the isolate object, which results in high communication complexity and low efficiency between threads, the above embodiments create the memory area shared by multiple thread objects outside the memory area managed by the isolate object, so that different threads implement information interaction based on the shared memory object, thereby reducing the complexity of cross-thread communication and improving communication efficiency.
In order to more clearly understand the foregoing implementation process of the embodiment of the present application, a specific implementation process of the embodiment of the present application is described below by taking the foregoing music playing application as an example.
After the music playing application program is started, because the application program needs to create at least 2 sub-threads (a download thread and a play thread), a Multi-thread code module (addon) is loaded by a main thread of the application program, so that a thread object and a shared memory object are created based on the code module, and information interaction among threads is realized by adopting the scheme provided by the embodiment of the application. Of course, if the launched application does not require multiple child threads, the code module may not be loaded.
The main thread of the music playing application calls a shared memory object creation method to create a shared memory object (hereinafter referred to as buffer). The main thread of the music playing application program calls a thread object creating method to create a downloading thread object and a playing thread object, wherein the downloading thread object at least comprises a music data downloading method and a message passing method, and the playing thread object at least comprises a music playing method and a message passing method. The creation methods of these objects can be referred to the aforementioned embodiments and are not described in detail here.
Further, the main thread of the music playing application may associate the download thread object (downloadThread object) and the play thread object (playerThread object) with the shared memory object (buffer object), respectively, so that the downloadThread object and the playerThread object can perform read/write operations on the memory area corresponding to the buffer object. Specifically, the main thread of the music playing application program may assign the token attribute value of the buffer object to the bufferToken attribute in the downloadThread object and the bufferToken attribute in the playerThread object, respectively, so that the downloadThread object and the playerThread object are each associated with the buffer object.
Further, the main thread of the music playing application may also associate the downloadThread object with the playerThread object. Specifically, the main thread of the music playing application program may assign the token attribute value of the playerThread object to the playerThreadToken attribute in the downloadThread object, thereby associating the downloadThread object with the playerThread object.
The main thread of the music playing application executes the start() method on the downloadThread object, thereby starting the download thread (downloadThread); the downloadThread can execute the methods contained in the downloadThread object, and because the downloadThread object is associated with the buffer object, the downloadThread can also execute the methods contained in the buffer object. The main thread of the music playing application executes the start() method on the playerThread object, thereby starting the play thread (playerThread); the playerThread can execute the methods contained in the playerThread object, and since the playerThread object is associated with the buffer object, the playerThread can also execute the methods contained in the buffer object.
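As a non-limiting illustration, the following sketch shows the main-thread setup described above. The module name, the createSharedBuffer() call and buffer sizes, and the empty job bodies are assumptions; token, bufferToken, playerThreadToken, and start() follow this description.

// Sketch only: main-thread setup for the music playing application.
const mt = require('multi-thread');                          // assumed module name
const buffer = mt.createSharedBuffer(256 * 1024, 8 * 1024 * 1024); // shared audio buffer
function downloadJob() { /* download music data and write it into the shared buffer */ }
function playerJob() { /* read music data from the shared buffer and play it */ }
const playerThread = mt.createThread(playerJob, null);       // play thread object
const downloadThread = mt.createThread(downloadJob, null);   // download thread object
// Associate both thread objects with the shared memory object ...
downloadThread.bufferToken = buffer.token;
playerThread.bufferToken = buffer.token;
// ... and associate the downloadThread object with the playerThread object, so the
// download thread can address its position messages to the play thread.
downloadThread.playerThreadToken = playerThread.token;
// Start both threads; each thread then executes its job and can use the buffer methods.
downloadThread.start();
playerThread.start();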
After the downloadThread is started, the following operations may be performed first:
acquiring the corresponding memory area according to the bufferToken attribute value in the downloadThread object (the attribute value is equal to the token attribute value of the buffer object);
and acquiring the corresponding playerThread according to the playerThreadToken attribute value in the downloadThread object, where the attribute value is equal to the token attribute value of the playerThread object.
Then, music data is downloaded by executing the method in the downloadThread object, and the downloaded music data is written into the memory area corresponding to the bufferToken attribute value by executing the method in the buffer object. In the process, the downloadThread writes the chunk with the size of chunkSize into the memory area each time until all the downloaded music data are written into the memory area. FIG. 5 shows a process of a write operation, which may include the following steps, as shown:
Step 501: the downloadThread queries the read-write state of the memory area corresponding to the buffer object, to determine whether the sub-area with starting position start0 and size chunkSize is currently locked.
Step 502: if the sub-area with starting position start0 and size chunkSize is not currently locked, the downloadThread calls the lockRW() method in the buffer object to add a write lock to the sub-area and obtain the write permission. The value of the start parameter in the lockRW() method is start0, and the value of the size parameter is chunkSize.
Step 503: the buffer object returns a lock object.
Step 504: the downloadThread calls the write () method in the buffer object to write data to the sub-region. The value of the lock parameter in the write () method is the name of the lock object returned in step 503 or the token attribute value of the lock object (the token attribute value of the lock object is used for uniquely identifying one lock object), and the value of the data parameter is the binary sequence of the audio data to be written. Since the lock object corresponds to a sub-area having a start position of start0 and a size of chunkSize, audio data can be written to the sub-area by the write () method.
Step 505: and calling an unlock () method in the buffer object by the downloadThread, unlocking the sub-region and releasing the write permission. The value of the lock parameter in the unlock () method is the name of the lock object returned in step 503 or the token attribute value of the lock object. Since the lock object corresponds to a sub-area having a start position of start0 and a size of chunkSize, the write lock applied to the sub-area can be released by the unlock () method.
Step 506: the downloadThread calls the sendmessage() method to send, to the playerThread, the position information of the sub-area in which the data block written in the current write operation is located. The value of the token parameter in the sendmessage() method is the token attribute value of the playerThread object, and the message parameter in the sendmessage() method includes the parameters representing the data position information: start0 and chunkSize.
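A rough sketch of the download thread's write loop corresponding to steps 501-506 is given below; getNextChunk() stands in for the music data downloading method and sendmessage() for the thread object's message passing method, and both, like the object shapes, are assumptions introduced for this sketch.

```typescript
// Assumed shapes, repeated here so the sketch is self-contained.
interface LockObject { token: string; }
interface SharedMemoryObject {
  lockRW(start: number, size: number): LockObject;
  write(lock: string, data: ArrayBuffer): void;
  unlock(lock: string): void;
}
// Hypothetical helpers: downloading one chunk, and the thread's message passing method.
declare function getNextChunk(size: number): ArrayBuffer | null;
declare function sendmessage(targetToken: string, message: object): void;

function downloadLoop(buffer: SharedMemoryObject, playerThreadToken: string,
                      chunkSize: number): void {
  let start0 = 0;
  let chunk: ArrayBuffer | null;
  while ((chunk = getNextChunk(chunkSize)) !== null) {
    // Steps 501-503: once the sub-area [start0, start0 + chunkSize) is free,
    // add a write lock and obtain the write permission (a lock object is returned).
    const lock = buffer.lockRW(start0, chunkSize);
    // Step 504: write the downloaded chunk into the locked sub-area.
    buffer.write(lock.token, chunk);
    // Step 505: release the write lock on the sub-area.
    buffer.unlock(lock.token);
    // Step 506: tell the player thread where the newly written data is located.
    sendmessage(playerThreadToken, { start: start0, size: chunkSize });
    start0 += chunkSize;
  }
}
```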
After the playerThread is started, the following operations may be performed first:
acquiring the corresponding memory area according to the bufferToken attribute value in the playerThread object (this attribute value is equal to the token attribute value of the buffer object).
Then, the playerThread reads data from the corresponding position in the memory area according to the data position information sent by the downloadThread and plays the data. FIG. 6 shows a process of a read operation, which may include the following steps, as shown:
Step 601: the playerThread calls the lockRD() method in the buffer object to add a read lock to the sub-area and obtain the read permission. The value of the start parameter in the lockRD() method is start0, and the value of the size parameter is chunkSize.
Step 602: the buffer object returns a lock object.
Step 603: and the playerThread calls a read () method in the buffer object, and reads data from the corresponding sub-area according to the position information. The value of the lock parameter in the read () method is the name of the lock object returned in step 602 or the token attribute value of the lock object (the token attribute value of the lock object is used for uniquely identifying one lock object), and the value of the data parameter is null. Since the lock object corresponds to a sub-region with a start position of start0 and a size of chunkSize, audio data can be read from the sub-region by a read () method, and the read audio data is assigned to the data parameter for returning.
Step 604: the buffer object returns the read data to the playerThread; the data can be assigned to the output parameter data for returning.
Step 605: and the playerThread calls an unlock () method in the buffer object to unlock the sub-region and release the read permission. The playerThread can play the read music data. The value of the lock parameter in the unlock () method is the name of the lock object returned in step 602 or the token attribute value of the lock object. Since the lock object corresponds to a sub-region having a start position of start0 and a size of chunkSize, the read lock added to the sub-region can be released by the unlock () method.
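The corresponding read loop of the player thread (steps 601-605) can be sketched as follows; onMessage() and play() are hypothetical stand-ins for message delivery and audio output, and the object shapes are the same assumptions as in the earlier sketches.

```typescript
// Assumed shapes, repeated here so the sketch is self-contained.
interface LockObject { token: string; }
interface SharedMemoryObject {
  lockRD(start: number, size: number): LockObject;
  read(lock: string): ArrayBuffer;
  unlock(lock: string): void;
}
// Hypothetical helpers: receiving a position message from the download thread, and playing audio.
declare function onMessage(handler: (msg: { start: number; size: number }) => void): void;
declare function play(audio: ArrayBuffer): void;

function playerLoop(buffer: SharedMemoryObject): void {
  onMessage((msg) => {
    // Steps 601-602: add a read lock to the sub-area indicated by the message.
    const lock = buffer.lockRD(msg.start, msg.size);
    // Steps 603-604: read the audio data from the locked sub-area.
    const data = buffer.read(lock.token);
    // Step 605: release the read lock, then play the data just read.
    buffer.unlock(lock.token);
    play(data);
  });
}
```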
As can be seen from the above description, in the above embodiments of the present application, the memory area corresponding to the shared memory object is different from the memory area managed by the memory management object in the dynamic language engine and can be shared and used by multiple threads; that is, multiple threads can exchange information based on the memory area corresponding to the shared memory object. Compared with creating multiple threads based on the memory management object in the dynamic language engine, which leads to high communication complexity and low efficiency between threads, the above embodiments create, outside the memory area managed by the memory management object in the dynamic language engine, a memory area shared and used by multiple thread objects, so that different threads exchange information based on the shared memory object, thereby reducing the complexity of cross-thread communication and improving communication efficiency.
Based on the same technical concept, the embodiment of the present application further provides an object management device, which can implement the object management function described in the foregoing embodiment.
Referring to fig. 7, a schematic structural diagram of an object management apparatus provided in an embodiment of the present application is shown, where the apparatus may include: a shared memory object management module 701 and a thread object management module 702, wherein:
a shared memory object management module 701, configured to create a shared memory object, where a memory region corresponding to the shared memory object is different from a memory region managed by a memory management object in a dynamic language engine, and the memory management object is used to manage a memory region used by a thread; a thread object management module 702, configured to create a thread object, where the thread object is associated with the shared memory object, so that a thread corresponding to the thread object shares a memory area corresponding to the shared memory object.
Optionally, the thread object includes a memory attribute, and the shared memory object includes a shared memory object identifier attribute; the thread object management module 702 is specifically configured to: and setting the value of the memory attribute in the thread object to be the same as the value of the shared memory object identification attribute in the shared memory object.
Optionally, the shared memory object includes the following attribute: a shared memory object identifier, used for uniquely identifying the shared memory object. The shared memory object comprises the following methods:
the data writing method is used for writing data into a memory area corresponding to the shared memory object;
and the data reading method is used for reading data from the memory area corresponding to the shared memory object.
Further, the shared memory object further includes the following method:
the write lock method is used for adding a write lock to a memory area corresponding to the shared memory object or a designated sub-area in the memory area so as to block threads which do not acquire write permission from writing data into the memory area to be added with the write lock or reading data from the memory area to be added with the write lock;
and the unlocking method is used for releasing the added write lock.
The input parameters of the write lock method comprise: the starting position of the designated sub-region in the memory region corresponding to the shared memory object, and the size of the designated sub-region.
Optionally, the shared memory object further includes the following method:
a read lock adding method, configured to add a read lock to the memory region corresponding to the shared memory object or to a designated sub-region in the memory region, so as to block threads other than the thread invoking the read lock adding method from writing data into the memory region to which the read lock is added;
and the unlocking method is used for releasing the added read lock.
The input parameters of the read lock adding method comprise: the starting position of the designated sub-region in the memory region corresponding to the shared memory object, and the size of the designated sub-region.
Optionally, the thread object includes the following attribute: a thread object identifier, used for uniquely identifying the thread object. The thread object comprises the following method: a message passing method, for passing messages between thread objects.
The message transmission method comprises the following input parameters: identification of the target thread, and message content passed to the target thread.
Optionally, the apparatus further includes a code library loading module 703, configured to load a code library, where the code library includes: code for creating a shared memory object and a corresponding first interface, and code for creating a thread object and a corresponding second interface;
the shared memory object management module 701 is specifically configured to: calling the first interface, and creating to obtain a shared memory object; the thread object management module 702 is specifically configured to: and calling the second interface to create and obtain a thread object. When the shared memory object management module 701 calls the first interface, the following operations are performed:
taking the size of an initially allocated memory area and the size of a reserved memory area as input parameters, so that the size of the memory area corresponding to the created shared memory object is the same as the size of the initially allocated memory area, and the size of the memory area reserved for the shared memory object is the same as the size of the reserved memory area; alternatively,
taking the size of an initially allocated memory area as an input parameter, so that the size of the memory area corresponding to the created shared memory object is the same as the size of the initially allocated memory area; alternatively,
and taking the size of the reserved memory area as an input parameter, so that the size of the memory area corresponding to the created shared memory object is the same as the size of a default set memory area, and the size of the memory area reserved for the shared memory object is the same as the size of the reserved memory area.
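The three calling conventions listed above can be illustrated roughly as follows; the function name createSharedMemory and the parameter names initSize and reservedSize are assumptions introduced for this sketch.

```typescript
interface SharedMemoryObject { token: string; }
// Hypothetical first interface: both size parameters are optional.
declare function createSharedMemory(initSize?: number, reservedSize?: number): SharedMemoryObject;

// Both sizes given: the region is initSize bytes, with reservedSize bytes kept in reserve.
const bufA = createSharedMemory(64 * 1024, 256 * 1024);
// Only the initial size: the region is initSize bytes, with no explicit reservation.
const bufB = createSharedMemory(64 * 1024);
// Only the reserved size: the region takes a default size, and reservedSize bytes are reserved.
const bufC = createSharedMemory(undefined, 256 * 1024);
```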
Optionally, the number of the shared memory objects created by the shared memory object management module 701 is one or more; the thread object management module 702 is further configured to: if the number of the shared memory objects created by the shared memory object management module 701 is multiple, the created thread objects are grouped, the thread object in each group is associated with one shared memory object, and the shared memory objects associated with the thread objects in different groups are different from each other.
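The optional grouping can be sketched as follows, again using the hypothetical factories from the earlier sketches: each group of thread objects is associated with exactly one shared memory object, and different groups use different shared memory objects.

```typescript
interface SharedMemoryObject { token: string; }
interface ThreadObject { bufferToken?: string; }
declare function createSharedMemory(initSize?: number): SharedMemoryObject;
declare function createThread(methods: Record<string, Function>): ThreadObject;

const buffers = [createSharedMemory(64 * 1024), createSharedMemory(64 * 1024)];
const groups: ThreadObject[][] = [
  [createThread({}), createThread({})],   // group 0 shares buffers[0]
  [createThread({}), createThread({})],   // group 1 shares buffers[1]
];
// Associate the thread objects in each group with that group's shared memory object.
groups.forEach((group, i) =>
  group.forEach((t) => { t.bufferToken = buffers[i].token; }));
```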
Based on the same technical concept, the embodiment of the present application further provides an information interaction apparatus, which can implement the information interaction process of the first thread in the foregoing embodiment.
Fig. 8 is a schematic structural diagram of an information interaction device according to an embodiment of the present application. The apparatus may include: a write operation module 801 and an interaction module 802, wherein:
a write operation module 801, configured to write data into the shared memory area; the shared memory area is different from the memory area managed by the memory management object in the dynamic language engine and can be shared and used by a plurality of threads, and the memory management object is used for managing the memory area used by the threads;
an interaction module 802, configured to send the location information of the written data in the shared memory area to a second thread.
Optionally, the location information includes: the starting position of the data written by the write operation module in the shared memory area and the size of the memory area occupied by the written data.
Optionally, the interaction module 802 is specifically configured to: sending the position information of the written data in the shared memory area to a second thread by executing a message transmission method defined in a first thread object; wherein the following parameters are used as input parameters of the message passing method: and the identifier of the second thread and the position information of the data written by the first thread in the shared memory area.
Optionally, the write operation module 801 is further configured to: before writing data into a shared memory area, obtaining write permission of a sub-area with a corresponding size in the shared memory area according to the size of the data to be written; and after data are written into the shared memory area, the write permission of the sub-area is released.
Optionally, the write operation module 801 is specifically configured to: adding a write lock to a sub-area in the shared memory area, wherein the sub-area has a size corresponding to the size of the data to be written, by executing a write lock method in the memory management object; the size of data to be written is used as an input parameter of the write locking method; releasing the added write lock by executing an unlocking method in the memory management object; and taking a lock object returned by the write lock adding method as an input parameter of the unlocking method, wherein the lock object is used for indicating a memory area added with a write lock.
Based on the same technical concept, the embodiment of the present application further provides an information interaction apparatus, which can implement the information interaction process of the second thread in the foregoing embodiment.
Referring to fig. 9, a schematic structural diagram of an information interaction device provided in an embodiment of the present application is shown, where the device may include: an interaction module 901 and a read operation module 902, wherein:
an interaction module 901, configured to receive location information, sent by a first thread, of data written by the first thread into a shared memory area; the shared memory area is different from the memory area managed by the memory management object in the dynamic language engine and can be shared and used by a plurality of threads, and the memory management object is used for managing the memory area used by the threads;
a read operation module 902, configured to read data written by the first thread from the shared memory area according to the location information.
Optionally, the location information includes: the starting position of the data written by the first thread in the shared memory area and the size of the memory area occupied by the written data.
Optionally, the interaction module 901 is specifically configured to: and receiving a message sent by the first thread, wherein the message carries the position information of the data written by the first thread in the shared memory area.
Optionally, the read operation module 902 is further configured to: according to the position information, before the data written by the first thread is read from the shared memory area, the read permission of the corresponding sub-area in the shared memory area is obtained according to the position information; and after the data written by the first thread is read from the shared memory area, the read permission of the plurality of sub-areas is released.
Optionally, the read operation module 902 is specifically configured to: adding a read lock to a sub-region with a corresponding size in the shared memory region by executing a read lock adding method in a memory management object; the position information is used as an input parameter of the read lock adding method; releasing the added read lock by executing an unlocking method in the memory management object; and taking a lock object returned by the read lock adding method as an input parameter of the unlocking method, wherein the lock object is used for indicating a memory area to which a read lock is added.
Based on the same technical concept, the embodiment of the present application further provides a computer device, and the computer device can implement the information interaction flow described in the foregoing embodiment.
Referring to fig. 10, which is a schematic structural diagram of a computer device provided in an embodiment of the present application, the computer device can implement the information interaction process described in the foregoing embodiments and may generally include: a processor 1001 and a memory 1002, and may further include a display 1003.
The processor 1001 may be, among other things, a general-purpose processor (such as a microprocessor or any conventional processor, etc.), a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components. The memory 1002 may specifically include an internal memory and/or an external memory, such as a random access memory, a flash memory, a read only memory, a programmable read only memory or an electrically erasable programmable memory, a register, and other storage media that are well known in the art.
Data communication connections exist between the processor 1001 and other modules, and data communication can be performed based on a bus architecture, for example. The bus architecture may include any number of interconnected buses and bridges, with one or more processors, represented by the processor 1001, and various circuits, represented by the memory 1002, being linked together. The bus architecture may also link together various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. The bus interface provides an interface. The processor 1001 is responsible for managing the bus architecture and general processing, and the memory 1002 may store data used by the processor 1001 in performing operations.
The processes disclosed in the embodiments of the present application may be applied to the processor 1001, or implemented by the processor 1001. In implementation, the steps of the above processes may be completed by integrated logic circuits of hardware in the processor 1001 or by instructions in the form of software. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. The steps of a method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor. The software module may be located in ram, flash memory, rom, prom or eprom, registers, or other storage media well known in the art.
Specifically, the processor 1001, coupled to the memory 1002, is configured to read the computer program instructions stored by the memory 1002 and, in response, perform the following:
writing data into the shared memory area by the first thread; the shared memory area is different from the memory area managed by the memory management object in the dynamic language engine and can be shared and used by a plurality of threads, and the memory management object is used for managing the memory area used by the threads; and the first thread sends the position information of the written data in the shared memory area to the second thread.
Optionally, in the processor 1001, the first thread may send the location information of the written data in the shared memory area to the second thread by executing a message passing method defined in the first thread object; wherein the first thread takes the following parameters as input parameters of the message passing method: and the identifier of the second thread and the position information of the data written by the first thread in the shared memory area.
Optionally, before the first thread writes data to the shared memory area in the processor 1001, the method further includes: obtaining the write permission of a sub-area with a corresponding size in the shared memory area according to the size of the data to be written, and after writing the data in the shared memory area, the method further comprises the following steps: and releasing the write permission of the sub-area.
Optionally, in the processor 1001, the first thread may add a write lock to a sub-area of the shared memory area, the sub-area having a size corresponding to a size occupied by the data to be written, by executing a write lock method in the memory management object; the first thread takes the size of data to be written as an input parameter of the write lock method; the first thread can remove the added write lock by executing the unlocking method in the memory management object; and the first thread takes a lock object returned by the write lock adding method as an input parameter of the unlocking method, and the lock object is used for indicating a memory area added with a write lock.
Optionally, in the processor 1001, before the first thread obtains the write permission for the sub-region of the corresponding size in the shared memory region, the first thread may also query the read-write state of the sub-region in the shared memory region, and if it finds that no other thread has obtained the write permission for the sub-region, the first thread obtains the write permission for the sub-region.
The specific implementation of the above process can be seen in the foregoing embodiments, and is not repeated here.
Based on the same technical concept, the embodiment of the application also provides an operating system.
Referring to fig. 11, a schematic structural diagram of an operating system provided in the embodiment of the present application is shown, and as shown in the drawing, the operating system may be abstracted as the following structure: an application layer, a runtime framework layer, and a system kernel, wherein:
the runtime framework layer comprises a dynamic language engine and a code base, wherein the code base comprises: code for creating a shared memory object, and code for creating a thread object; the shared memory object created by the code for creating the shared memory object has a corresponding memory area different from a memory area managed by a memory management object in a dynamic language engine, wherein the memory management object is used for managing the memory area used by a thread; and associating the thread object created by the code for creating the thread object with the shared memory object so that the thread corresponding to the thread object shares the memory area corresponding to the shared memory object.
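As a rough illustration of this layering, the code library in the runtime framework layer might expose the two creation interfaces to application-layer code as follows; the library name, function names, and interface shapes are assumptions introduced for this sketch.

```typescript
interface SharedMemoryObject { token: string; }
interface ThreadObject { token: string; bufferToken?: string; start(): void; }

// Hypothetical shape of the loaded code library.
interface CodeLibrary {
  createSharedMemory(initSize?: number, reservedSize?: number): SharedMemoryObject; // first interface
  createThread(methods: Record<string, Function>): ThreadObject;                    // second interface
}
declare function loadCodeLibrary(name: string): CodeLibrary;

// Application-layer code running on the dynamic language engine:
const lib = loadCodeLibrary("shared-memory-threads");   // hypothetical library name
const buf = lib.createSharedMemory(64 * 1024);
const worker = lib.createThread({ run() { /* thread body */ } });
worker.bufferToken = buf.token;   // associate the thread object with the shared memory object
worker.start();                   // start the corresponding thread
```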
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (39)

1. An information interaction method is applied to an operating system, and the operating system comprises: the system comprises an application layer, a runtime framework layer and a system kernel, wherein the runtime framework layer comprises a dynamic language engine and a code base, and the code base comprises: code for creating a shared memory object, and code for creating a thread object; the shared memory object created by the code for creating the shared memory object has a corresponding shared memory area different from a memory area managed by a memory management object in the dynamic language engine, wherein the memory management object is used for managing the memory area used by the thread; creating a thread object through the code for creating the thread object, associating the thread object with the shared memory object to enable the thread corresponding to the thread object to share and use a shared memory area corresponding to the shared memory object, and starting a corresponding thread to start running after the creation of the thread object is completed, wherein the thread object associated with the shared memory object comprises a first thread object and a second thread object, the first thread object corresponds to a first thread, and the second thread object corresponds to a second thread; the method comprises the following steps:
writing data into the shared memory area by the first thread;
the first thread sends the position information of the written data in the shared memory area to a second thread, and the method comprises the following steps: the first thread sends the position information of the written data in the shared memory area to a second thread by executing a message transmission method defined in a first thread object; wherein the first thread takes the following parameters as input parameters of the message passing method: and the identifier of the second thread and the position information of the data written by the first thread in the shared memory area.
2. The method of claim 1, wherein the location information comprises:
the starting position of the data written by the first thread in the shared memory area and the size of the memory area occupied by the written data.
3. The method of claim 1 or 2, wherein prior to the first thread writing data to the shared memory region, further comprising:
the first thread acquires the write permission of a sub-area with a corresponding size in the shared memory area according to the size of data to be written;
after the first thread writes data into the shared memory area, the method further includes:
and the first thread relieves the write permission of the sub-area.
4. The method as claimed in claim 3, wherein the obtaining, by the first thread, the write permission for the sub-area of the corresponding size in the shared memory area according to the size of the data to be written comprises:
the first thread adds a write lock to a sub-area in the shared memory area, wherein the sub-area corresponds to the size of the data to be written, by executing a write lock method in the memory management object; the first thread takes the size of data to be written as an input parameter of the write lock method;
the first thread removes write permission to the sub-region, including:
the first thread releases the added write lock by executing an unlocking method in the memory management object; and the first thread takes a lock object returned by the write lock adding method as an input parameter of the unlocking method, and the lock object is used for indicating a memory area added with a write lock.
5. The method of claim 3, wherein prior to the first thread obtaining write permission to a correspondingly sized sub-region of the shared memory region, further comprising:
the first thread queries the read-write state of the sub-region in the shared memory region;
the obtaining, by the first thread, write permission to a sub-region of a corresponding size in the shared memory region includes:
and if the first thread queries that the sub-region is not subjected to the write permission obtained by other threads, obtaining the write permission of the sub-region.
6. An information interaction method is applied to an operating system, and the operating system comprises: the system comprises an application layer, a runtime framework layer and a system kernel, wherein the runtime framework layer comprises a dynamic language engine and a code base, and the code base comprises: code for creating a shared memory object, and code for creating a thread object; the shared memory object created by the code for creating the shared memory object has a corresponding shared memory area different from a memory area managed by a memory management object in the dynamic language engine, wherein the memory management object is used for managing the memory area used by the thread; creating a thread object through the code for creating the thread object, associating the thread object with the shared memory object to enable the thread corresponding to the thread object to share and use a shared memory area corresponding to the shared memory object, and starting a corresponding thread to start running after the creation of the thread object is completed, wherein the thread object associated with the shared memory object comprises a first thread object and a second thread object, the first thread object corresponds to a first thread, and the second thread object corresponds to a second thread; the method comprises the following steps:
the second thread receives the position information of the data written by the first thread in the shared memory area, which is sent by the first thread, and the position information comprises the following steps: the second thread receives a message sent by the first thread through a message transmission method defined in the first thread object, wherein the message carries position information of data written by the first thread in the shared memory area;
and the second thread reads the data written by the first thread from the shared memory area according to the position information.
7. The method of claim 6, wherein the location information comprises:
the starting position of the data written by the first thread in the shared memory area and the size of the memory area occupied by the written data.
8. The method as claimed in claim 6 or 7, wherein before the second thread reads the data written by the first thread from the shared memory area according to the location information, the method further comprises:
the second thread acquires the read permission of the corresponding sub-area in the shared memory area according to the position information;
after the second thread reads the data written by the first thread from the shared memory area, the method further includes:
the second thread relieves read permission of a plurality of the sub-regions.
9. The method of claim 8, wherein obtaining, by the second thread, read permission for a sub-region of a corresponding size in the shared memory region based on the location information comprises:
the second thread adds a read lock to a sub-region with a corresponding size in the shared memory region by executing a read lock adding method in the memory management object; the second thread takes the position information as an input parameter of the reading locking method;
the second thread removing the read permission of the sub-area comprises:
the second thread releases the added read lock by executing an unlocking method in the memory management object; and the second thread takes a lock object returned by the reading and writing lock method as an input parameter of the unlocking method, and the lock object is used for indicating a memory area added with reading and writing.
10. An object management method, applied to an operating system, the operating system comprising: the system comprises an application layer, a runtime framework layer and a system kernel, wherein the runtime framework layer comprises a dynamic language engine and a code base, and the code base comprises: code for creating a shared memory object, and code for creating a thread object; the method comprises the following steps:
creating a shared memory object through a code used for creating the shared memory object in the code base, wherein a memory area corresponding to the shared memory object is different from a memory area managed by a memory management object in a dynamic language engine, and the memory management object is used for managing the memory area used by a thread;
creating a thread object through a code used for creating the thread object in the code base, wherein the thread object is associated with the shared memory object so that a thread corresponding to the thread object shares a memory area corresponding to the shared memory object, and starting a corresponding thread to start running after the thread object is created;
wherein, the thread object comprises the following attributes:
the thread object identifier is used for uniquely identifying the thread object;
the thread object comprises the following methods:
a message transfer method for transferring messages between thread objects;
and the message passing method comprises the following input parameters:
identification of the target thread;
the content of the message passed to the target thread.
11. The method of claim 10, wherein the thread object includes a memory attribute, and the shared memory object includes a shared memory object identification attribute;
associating the thread object with the shared memory object by:
and setting the value of the memory attribute in the thread object to be the same as the value of the shared memory object identification attribute in the shared memory object.
12. The method of claim 10 or 11, wherein the shared memory object includes the following attributes:
the shared memory object identifier is used for uniquely identifying the shared memory object;
the shared memory object comprises the following methods:
the data writing method is used for writing data into a memory area corresponding to the shared memory object;
and the data reading method is used for reading data from the memory area corresponding to the shared memory object.
13. The method of claim 12, wherein the shared memory object further comprises the method of:
the write lock method is used for adding a write lock to a memory area corresponding to the shared memory object or a designated sub-area in the memory area so as to block threads which do not acquire write permission from writing data into the memory area to be added with the write lock or reading data from the memory area to be added with the write lock;
and the unlocking method is used for releasing the added write lock.
14. The method of claim 13, wherein the input parameters for the write lock method include: the starting position of the designated sub-region in the memory region corresponding to the shared memory object, and the size of the designated sub-region.
15. The method of claim 12, wherein the shared memory object further comprises the method of:
a read lock adding method, configured to add a read lock to a memory region corresponding to the shared memory object or to a designated sub-region in the memory region, so as to block other threads except for invoking the read lock adding method from writing data in the memory region to which the read lock is added;
and the unlocking method is used for releasing the added read lock.
16. The method of claim 15, wherein the input parameters for the locking method include: the starting position of the designated sub-region in the memory region corresponding to the shared memory object, and the size of the designated sub-region.
17. The method of claim 10, further comprising: loading a code base, wherein the code base comprises: the method comprises the steps that codes used for creating a shared memory object and a corresponding first interface, and codes used for creating a thread object and a corresponding second interface;
creating a shared memory object, comprising: calling the first interface, and creating to obtain a shared memory object;
creating a thread object, comprising: and calling the second interface to create and obtain a thread object.
18. The method according to claim 17, wherein the size of an initially allocated memory region and the size of a reserved memory region are used as input parameters when the first interface is called, so that the size of a memory region corresponding to the created shared memory object is the same as the size of the initially allocated memory region, and the size of a memory region reserved for the shared memory object is the same as the size of the reserved memory region; alternatively,
when the first interface is called, the size of an initially allocated memory area is used as an input parameter, so that the size of a memory area corresponding to the created shared memory object is the same as the size of the initially allocated memory area; alternatively,
when the first interface is called, the size of the reserved memory area is used as an input parameter, so that the size of the memory area corresponding to the created shared memory object is the same as the size of a default set memory area, and the size of the memory area reserved for the shared memory object is the same as the size of the reserved memory area.
19. The method of claim 10, wherein after creating the shared memory object, further comprising:
and allocating all or part of the memory areas reserved for the shared memory object to the shared memory object.
20. The method of claim 10, wherein the number of shared memory objects created is one or more;
and if the number of the created shared memory objects is multiple, grouping the created thread objects, wherein the thread objects in each group are associated with one shared memory object, and the shared memory objects associated with the thread objects in different groups are different from each other.
21. An information interaction device, which is a device corresponding to a first thread object and is suitable for an operating system, wherein the operating system comprises: the system comprises an application layer, a runtime framework layer and a system kernel, wherein the runtime framework layer comprises a dynamic language engine and a code base, and the code base comprises: code for creating a shared memory object, and code for creating a thread object; the shared memory object created by the code for creating the shared memory object has a corresponding shared memory area different from a memory area managed by a memory management object in the dynamic language engine, wherein the memory management object is used for managing the memory area used by the thread; creating a thread object through the code for creating the thread object, associating the thread object with the shared memory object to enable the thread corresponding to the thread object to share and use a shared memory area corresponding to the shared memory object, and starting a corresponding thread to start running after the creation of the thread object is completed, wherein the thread object associated with the shared memory object comprises a first thread object and a second thread object, the first thread object corresponds to a first thread, and the second thread object corresponds to a second thread; characterized in that the device comprises:
the write operation module is used for writing data into the shared memory area;
an interaction module, configured to send location information of the written data in the shared memory area to a second thread, where the interaction module includes: sending the position information of the written data in the shared memory area to a second thread by executing a message transmission method defined in a first thread object; wherein the first thread takes the following parameters as input parameters of the message passing method: and the identifier of the second thread and the position information of the data written by the first thread in the shared memory area.
22. The apparatus of claim 21, wherein the location information comprises:
the starting position of the data written by the write operation module in the shared memory area and the size of the memory area occupied by the written data.
23. The apparatus of claim 21 or 22, wherein the write operation module is further to:
before writing data into a shared memory area, obtaining write permission of a sub-area with a corresponding size in the shared memory area according to the size of the data to be written;
and after data are written into the shared memory area, the write permission of the sub-area is released.
24. The apparatus of claim 23, wherein the write operation module is specifically configured to:
adding a write lock to a sub-area in the shared memory area, wherein the sub-area has a size corresponding to the size of the data to be written, by executing a write lock method in the memory management object; the size of data to be written is used as an input parameter of the write locking method;
releasing the added write lock by executing an unlocking method in the memory management object; and taking a lock object returned by the write lock adding method as an input parameter of the unlocking method, wherein the lock object is used for indicating a memory area added with a write lock.
25. An information interaction device, which is a device corresponding to a second thread object and is suitable for an operating system, wherein the operating system comprises: the system comprises an application layer, a runtime framework layer and a system kernel, wherein the runtime framework layer comprises a dynamic language engine and a code base, and the code base comprises: code for creating a shared memory object, and code for creating a thread object; the shared memory object created by the code for creating the shared memory object has a corresponding shared memory area different from a memory area managed by a memory management object in the dynamic language engine, wherein the memory management object is used for managing the memory area used by the thread; creating a thread object through the code for creating the thread object, associating the thread object with the shared memory object to enable the thread corresponding to the thread object to share and use a shared memory area corresponding to the shared memory object, and starting a corresponding thread to start running after the creation of the thread object is completed, wherein the thread object associated with the shared memory object comprises a first thread object and a second thread object, the first thread object corresponds to a first thread, and the second thread object corresponds to a second thread; characterized in that the device comprises:
an interaction module, configured to receive location information, in a shared memory area, of data written by a first thread in a shared memory, where the location information is sent by the first thread, and the interaction module includes: receiving a message sent by the first thread through a message transmission method defined in the first thread object, wherein the message carries position information of data written by the first thread in the shared memory area;
and the read operation module is used for reading the data written by the first thread from the shared memory area according to the position information.
26. The apparatus of claim 25, wherein the location information comprises:
the starting position of the data written by the first thread in the shared memory area and the size of the memory area occupied by the written data.
27. The apparatus of claim 25 or 26, wherein the read operation module is further to:
according to the position information, before the data written by the first thread is read from the shared memory area, the read permission of the corresponding sub-area in the shared memory area is obtained according to the position information;
and after the data written by the first thread is read from the shared memory area, the read permission of the plurality of sub-areas is released.
28. The apparatus of claim 27, wherein the read operation module is specifically configured to:
adding a read lock to a sub-region with a corresponding size in the shared memory region by executing a read lock adding method in a memory management object; the position information is used as an input parameter of the locking method;
releasing the added read lock by executing an unlocking method in the memory management object; and taking a lock object returned by the reading and writing locking method as an input parameter of the unlocking method, wherein the lock object is used for indicating a memory area added with reading and writing.
29. An object management apparatus adapted to an operating system, the operating system comprising: the system comprises an application layer, a runtime framework layer and a system kernel, wherein the runtime framework layer comprises a dynamic language engine and a code base, and the code base comprises: code for creating a shared memory object, and code for creating a thread object; the object management apparatus includes:
a shared memory object management module, configured to create a shared memory object through a code used for creating the shared memory object in the code library, where a memory area corresponding to the shared memory object is different from a memory area managed by a memory management object in a dynamic language engine, and the memory management object is used for managing a memory area used by a thread;
the thread object management module is used for creating a thread object through a code used for creating the thread object in the code base, the thread object is associated with the shared memory object, so that a thread corresponding to the thread object shares a memory area corresponding to the shared memory object, and the corresponding thread is started to run after the thread object is created;
wherein, the thread object comprises the following attributes:
the thread object identifier is used for uniquely identifying the thread object;
the thread object comprises the following methods:
a message transfer method for transferring messages between thread objects;
and the message passing method comprises the following input parameters:
identification of the target thread;
the content of the message passed to the target thread.
30. The apparatus of claim 29, wherein the thread object includes a memory attribute, and wherein the shared memory object includes a shared memory object identification attribute;
the thread object management module is specifically configured to: and setting the value of the memory attribute in the thread object to be the same as the value of the shared memory object identification attribute in the shared memory object.
31. The apparatus according to claim 29 or 30, wherein the shared memory object comprises the following attributes:
the shared memory object identifier is used for uniquely identifying the shared memory object;
the shared memory object comprises the following methods:
the data writing method is used for writing data into a memory area corresponding to the shared memory object;
and the data reading method is used for reading data from the memory area corresponding to the shared memory object.
32. The apparatus as claimed in claim 31, wherein said shared memory object further comprises means for:
the write lock method is used for adding a write lock to a memory area corresponding to the shared memory object or a designated sub-area in the memory area so as to block threads which do not acquire write permission from writing data into the memory area to be added with the write lock or reading data from the memory area to be added with the write lock;
and the unlocking method is used for releasing the added write lock.
33. The apparatus of claim 32, wherein the input parameters for the write lock method comprise: the starting position of the designated sub-region in the memory region corresponding to the shared memory object, and the size of the designated sub-region.
34. The apparatus as claimed in claim 31, wherein said shared memory object further comprises means for:
a read lock adding method, configured to add a read lock to a memory region corresponding to the shared memory object or to a designated sub-region in the memory region, so as to block other threads except for invoking the read lock adding method from writing data in the memory region to which the read lock is added;
and the unlocking method is used for releasing the added read lock.
35. The apparatus of claim 34, wherein the input parameters for the locking method comprise: the starting position of the designated sub-region in the memory region corresponding to the shared memory object, and the size of the designated sub-region.
36. The apparatus of claim 29, further comprising:
a code base loading module, configured to load a code base, where the code base includes: the method comprises the steps that codes used for creating a shared memory object and a corresponding first interface, and codes used for creating a thread object and a corresponding second interface;
the shared memory object management module is specifically configured to: calling the first interface, and creating to obtain a shared memory object;
the thread object management module is specifically configured to: calling the second interface, and creating to obtain a thread object;
when the shared memory object management module calls the first interface, the following operations are executed:
taking the size of an initially allocated memory area and the size of a reserved memory area as input parameters, so that the size of a memory area corresponding to a created shared memory object is the same as the size of the initially allocated memory area, and the size of a memory area reserved for the shared memory object is the same as the size of the reserved memory area; alternatively,
taking the size of an initially allocated memory area as an input parameter, so that the size of a memory area corresponding to the created shared memory object is the same as the size of the initially allocated memory area; alternatively,
and taking the size of the reserved memory area as an input parameter, so that the size of the memory area corresponding to the created shared memory object is the same as the size of a default set memory area, and the size of the memory area reserved for the shared memory object is the same as the size of the reserved memory area.
37. The apparatus as claimed in claim 29, wherein the number of shared memory objects created by the shared memory object management module is one or more;
the thread object management module is further configured to: if the number of the shared memory objects created by the shared memory object management module is multiple, the created thread objects are grouped, the thread object in each group is associated with one shared memory object, and the shared memory objects associated with the thread objects of different groups are different from each other.
38. An operating system, comprising: an application layer, a runtime framework layer, and a system kernel;
the runtime framework layer comprises a dynamic language engine and a code base, wherein the code base comprises: code for creating a shared memory object, and code for creating a thread object;
the shared memory object created by the code for creating the shared memory object has a corresponding memory area different from a memory area managed by a memory management object in a dynamic language engine, wherein the memory management object is used for managing the memory area used by a thread;
the thread object created by the code for creating the thread object is associated with the shared memory object, so that the thread corresponding to the thread object shares a memory area corresponding to the shared memory object, and the corresponding thread is started to run after the thread object is created;
wherein, the thread object comprises the following attributes:
the thread object identifier is used for uniquely identifying the thread object;
the thread object comprises the following methods:
a message transfer method for transferring messages between thread objects;
and the message passing method comprises the following input parameters:
identification of the target thread;
the content of the message passed to the target thread.
39. A computer device, comprising: a processor, a memory;
a memory for storing computer program instructions adapted to form an operating system comprising: the system comprises an application layer, a runtime framework layer and a system kernel, wherein the runtime framework layer comprises a dynamic language engine and a code base, and the code base comprises: code for creating a shared memory object, and code for creating a thread object; the shared memory object created by the code for creating the shared memory object has a corresponding shared memory area different from a memory area managed by a memory management object in the dynamic language engine, wherein the memory management object is used for managing the memory area used by the thread; creating a thread object through the code for creating the thread object, associating the thread object with the shared memory object to enable the thread corresponding to the thread object to share and use a shared memory area corresponding to the shared memory object, and starting a corresponding thread to start running after the creation of the thread object is completed, wherein the thread object associated with the shared memory object comprises a first thread object and a second thread object, the first thread object corresponds to a first thread, and the second thread object corresponds to a second thread;
a processor, coupled to the memory, for reading computer program instructions stored by the memory and, in response, performing the following:
writing data into the shared memory area by the first thread; the shared memory area is different from the memory area managed by the memory management object in the dynamic language engine and can be shared and used by a plurality of threads, and the memory management object is used for managing the memory area used by the threads;
sending, by the first thread, position information of the written data in the shared memory area to the second thread, which comprises: the first thread sending the position information of the written data in the shared memory area to the second thread by executing a message passing method defined in the first thread object, wherein the first thread uses the following as input parameters of the message passing method: the identifier of the second thread, and the position information of the data written by the first thread in the shared memory area.
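Claim 39's interaction between the first and second threads can be sketched as follows, assuming the shared memory area is modeled as a SharedArrayBuffer and the "position information" is an offset/length pair; the `postMessage` mailbox helper and the identifier "thread-2" are stand-ins for the claimed message passing method and are not defined by the patent:

```typescript
// Shared memory area usable by both threads, distinct from the engine-managed heap.
const shared = new SharedArrayBuffer(1024);
const view = new Uint8Array(shared);

// Stand-in for the message passing method: a mailbox keyed by thread identifier.
type Position = { offset: number; length: number };
const mailboxes = new Map<string, Position[]>([["thread-2", []]]);
function postMessage(targetThreadId: string, position: Position): void {
  mailboxes.get(targetThreadId)?.push(position);
}

// First thread: write data into the shared memory area at some offset...
const payload = new TextEncoder().encode("hello from thread 1");
const offset = 0;
view.set(payload, offset);

// ...then send only the position information (not the data itself), using the
// second thread's identifier and the written data's location as the input
// parameters of the message passing method.
postMessage("thread-2", { offset, length: payload.byteLength });

// Second thread: use the received position to read straight from the shared area.
const pos = mailboxes.get("thread-2")!.shift()!;
// slice() copies the bytes into a non-shared buffer so TextDecoder can decode them.
const received = new TextDecoder().decode(view.slice(pos.offset, pos.offset + pos.length));
console.log(received); // "hello from thread 1"
```

Because only the offset and length travel through the message channel, the written data itself never has to be copied through the memory area managed by the dynamic language engine, which is the purpose of the separate shared memory area in the claims.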
CN201610983813.6A 2016-11-08 2016-11-08 Information interaction method, object management method, device and system Active CN108062252B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610983813.6A CN108062252B (en) 2016-11-08 2016-11-08 Information interaction method, object management method, device and system


Publications (2)

Publication Number Publication Date
CN108062252A CN108062252A (en) 2018-05-22
CN108062252B true CN108062252B (en) 2022-02-01

Family

ID=62136851

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610983813.6A Active CN108062252B (en) 2016-11-08 2016-11-08 Information interaction method, object management method, device and system

Country Status (1)

Country Link
CN (1) CN108062252B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109739583B (en) * 2018-12-13 2023-09-08 平安科技(深圳)有限公司 Method, device, computer equipment and storage medium for parallel running of multiple threads
CN109947572B (en) * 2019-03-25 2023-09-05 Oppo广东移动通信有限公司 Communication control method, device, electronic equipment and storage medium
CN110399235B (en) 2019-07-16 2020-07-28 阿里巴巴集团控股有限公司 Multithreading data transmission method and device in TEE system
CN110442462B (en) 2019-07-16 2020-07-28 阿里巴巴集团控股有限公司 Multithreading data transmission method and device in TEE system
US10699015B1 (en) 2020-01-10 2020-06-30 Alibaba Group Holding Limited Method and apparatus for data transmission in a tee system
CN111114320B (en) * 2019-12-27 2022-11-18 深圳市众鸿科技股份有限公司 Vehicle-mounted intelligent cabin sharing display method and system
CN111538585B (en) * 2019-12-31 2022-03-01 明度智云(浙江)科技有限公司 Js-based server process scheduling method, system and device
CN111324461B (en) * 2020-02-20 2023-09-01 西安芯瞳半导体技术有限公司 Memory allocation method, memory allocation device, computer equipment and storage medium
CN111796931B (en) * 2020-06-09 2024-03-29 阿里巴巴集团控股有限公司 Information processing method, device, computing equipment and medium
CN112181679A (en) * 2020-09-13 2021-01-05 中国运载火箭技术研究院 Rocket data processing method and device, computer storage medium and electronic equipment
CN112187887A (en) * 2020-09-14 2021-01-05 北京三快在线科技有限公司 Webpage real-time communication method and device for multiple pages and electronic equipment
CN112597162B (en) * 2020-12-25 2023-08-08 平安银行股份有限公司 Data set acquisition method, system, equipment and storage medium
CN112822193B (en) * 2021-01-05 2023-03-24 网易(杭州)网络有限公司 Application communication method, device, equipment and storage medium
CN117077115B (en) * 2023-10-13 2023-12-15 沐曦集成电路(上海)有限公司 Cross-language multi-process interaction method, electronic equipment and medium in chip verification stage


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7203775B2 (en) * 2003-01-07 2007-04-10 Hewlett-Packard Development Company, L.P. System and method for avoiding deadlock
CN100353325C (en) * 2004-08-23 2007-12-05 华为技术有限公司 Method for realing sharing internal stored data base and internal stored data base system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101504617A (en) * 2009-03-23 2009-08-12 华为技术有限公司 Data transmitting and receiving method and device based on processor sharing internal memory
CN103176852A (en) * 2011-12-22 2013-06-26 腾讯科技(深圳)有限公司 Method and device for inter-progress communication
CN103034544A (en) * 2012-12-04 2013-04-10 杭州迪普科技有限公司 Management method and device for user mode and kernel mode to share memory
CN103514053A (en) * 2013-09-22 2014-01-15 中国科学院信息工程研究所 Shared-memory-based method for conducting communication among multiple processes
CN104572313A (en) * 2013-10-22 2015-04-29 华为技术有限公司 Inter-process communication method and device
CN105843693A (en) * 2016-03-22 2016-08-10 同济大学 High-speed maglev transportation simulation oriented memory sharing method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Buffer management for shared-memory ATM switches; Mutlu Arpaci, John A. Copeland; Communications Surveys and Tutorials; 2000-12-31; Vol. 3, No. 1; pp. 2-10 *
Research on shared memory resource allocation and management in multi-core systems; 高珂, 陈荔城, 范东睿, 刘志勇; Chinese Journal of Computers (计算机学报); 2015-05-31; Vol. 38, No. 5; pp. 1020-1034 *

Also Published As

Publication number Publication date
CN108062252A (en) 2018-05-22

Similar Documents

Publication Publication Date Title
CN108062252B (en) Information interaction method, object management method, device and system
CN109032691B (en) Applet running method and device and storage medium
CN101421711B (en) Virtual execution system for resource-constrained devices
US8739147B2 (en) Class isolation to minimize memory usage in a device
US20070234322A1 (en) Dynamic delegation chain for runtime adaptation of a code unit to an environment
CN103970563B (en) The method of dynamic load Android class
CN111221630B (en) Business process processing method, device, equipment, readable storage medium and system
CN105242962A (en) Quick lightweight thread triggering method based on heterogeneous many-core
CN109947643B (en) A/B test-based experimental scheme configuration method, device and equipment
CN104750528A (en) Management method and device for components in Android program
CN113254240B (en) Method, system, device and medium for managing control device
CN113190282A (en) Android operating environment construction method and device
CN113360893A (en) Container-based intelligent contract execution method and device and storage medium
US8041852B1 (en) System and method for using a shared buffer construct in performance of concurrent data-driven tasks
CN112148351A (en) Cross-version compatibility method and system for application software
CN108647087B (en) Method, device, server and storage medium for realizing reentry of PHP kernel
CN111459573A (en) Method and device for starting intelligent contract execution environment
US20220261489A1 (en) Capability management method and computer device
CN115934656A (en) Information processing method, service processing method and device
CN115705212A (en) Data response method and device in system platform and electronic equipment
CN112256249A (en) Method and equipment for expanding Android system function and computer storage medium
CN112214213A (en) Linux kernel development and management method and device, computer equipment and storage medium
CN112306539A (en) Method, system, terminal and medium for developing application layer of single chip microcomputer
CN114116181B (en) Distributed data analysis task scheduling system and method
CN113448588B (en) Data security processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201222

Address after: Room 603, 6/F, Roche Plaza, 788 Cheung Sha Wan Road, Kowloon, China

Applicant after: Zebra smart travel network (Hong Kong) Limited

Address before: 4th Floor, Capital Building, P.O. Box 847, Grand Cayman, Cayman Islands

Applicant before: Alibaba Group Holding Ltd.

GR01 Patent grant