CN109857573B - Data sharing method, device, equipment and system

Info

Publication number
CN109857573B
CN109857573B (application CN201811653675.0A)
Authority
CN
China
Prior art keywords
service data
area
message
processor
address
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811653675.0A
Other languages
Chinese (zh)
Other versions
CN109857573A (en)
Inventor
刘毛
刘海军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Intellifusion Technologies Co Ltd
Original Assignee
Shenzhen Intellifusion Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Intellifusion Technologies Co Ltd filed Critical Shenzhen Intellifusion Technologies Co Ltd
Priority to CN201811653675.0A
Publication of CN109857573A
Priority to PCT/CN2019/121554
Application granted
Publication of CN109857573B
Active legal status
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/54 Interprogram communication

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a data sharing method, a device, equipment and a system, wherein the method comprises the following steps: the device divides a shared memory into a first area and a second area through a first processor; the first area is used for storing the service data acquired by the first processor; the second area is used for storing messages containing addresses of the service data. The device dynamically applies for a third area in the first area through the first processor to store first service data, and stores a first message containing an address of the first service data in the second area; the first service data is data in the service data; the first message is a message in the second area. The device reads the first message in the second area through a second processor to read the address of the first service data in the first message, and reads the first service data through the read address of the first service data. By adopting the method and the device, the copying of service data can be reduced, and the data processing efficiency and the throughput rate of the system can be improved.

Description

Data sharing method, device, equipment and system
Technical Field
The present application relates to the field of communications technologies, and in particular, to a data sharing method, apparatus, device, and system.
Background
In visual processing systems, multiple processors are often used to speed up video frame processing. For example, an application processor and a decoder decode the video frames, the video frames are then sent to a vision processor and a corresponding accelerator for processing, and after the processing is finished the application processor sends the video frames to the upper-layer application for further processing. Because the amount of video frame data is large, data is often transferred between different processors by copying. A large number of copies occupies a large amount of system bandwidth, which reduces system throughput and affects system performance.
Disclosure of Invention
The application provides a data sharing method, device, equipment and system, which can reduce the copying of service data, avoid the transmission of a large amount of data between processors at the same time point, and improve the data processing efficiency of a data sharing system and the throughput rate of the system.
In a first aspect, the present application provides a data sharing method, including:
the device divides the shared memory into a first area and a second area through a first processor; the first area is used for storing the service data acquired by the first processor; the second area is used for storing a message containing the address of the service data;
the device dynamically applies for a third area in the first area through the first processor to store first service data, and stores a first message containing an address of the first service data in the second area; the first service data is data in the service data; the first message is a message in the second region;
the device reads the first message in the second area through the second processor to read the address of the first service data in the first message, and reads the first service data through the read address of the first service data.
In combination with the first aspect, in some possible embodiments,
before the device dynamically applies, through the first processor, for a third area in the first area to store the first service data, the method further includes:
the equipment acquires the first service data through the first processor, and writes the first service data into a cache of the equipment through a Map cache mechanism.
In combination with the first aspect, in some possible embodiments,
before the storing the first message containing the address of the first service data in the second area, the method further includes:
the device encapsulates the address of the first service data into a first message containing the address of the first service data through the first processor, and writes the first message into a cache of the device through the first processor by using a Map cache mechanism.
In combination with the first aspect, in some possible embodiments,
the device encapsulates, by the first processor, the address of the first service data into a first message containing the address of the first service data, including:
the device encapsulates, by the first processor, the address of the first service data into a first message in a message queue that contains the address of the first service data.
In combination with the first aspect, in some possible embodiments,
the message contains a number associated with the message; different messages are associated with different numbers;
the first message comprises a first number associated with the first message;
after the device reads the first message in the second area through the second processor to read the address of the first service data in the first message, and reads the first service data through the read address of the first service data, the method further includes:
the device processes the read first service data through a second processor to obtain a first processing result containing the first number, and writes the first processing result into the second area; the first processing result is used for informing the first processor to release the first service data stored in the cache of the equipment and a first message containing the address of the first service data;
and if the first processing result containing the first number is read from the second area through the first processor, releasing, through the first processor, the first service data stored in the cache of the device and the first message containing the address of the first service data.
In combination with the first aspect, in some possible embodiments,
if the information cached in the cache of the device reaches the cache capacity of the cache, the first processor first releases the information that was stored in the cache earliest, and the information includes: the first service data or a first message containing an address of the first service data.
In combination with the first aspect, in some possible embodiments,
if the information cached in the cache of the device reaches the cache capacity of the cache, the first processor releases the information with the lowest priority in the cache, and the information comprises: the first service data or a first message containing an address of the first service data.
In a second aspect, the present application provides a data sharing apparatus, comprising:
an obtaining unit, configured to obtain service data;
a dividing unit, configured to divide the shared memory into a first area and a second area; the first area is used for storing the service data acquired by the obtaining unit; the second area is used for storing a message containing the address of the service data;
the application unit is used for dynamically applying for a third area in the first area;
a storage unit, configured to store first service data and a first message including an address of the first service data in the third area dynamically applied by the application unit; the first service data is data in the service data; the first message is a message in the second region;
and a reading unit, configured to read the first message in the second area, so as to read an address of the first service data in the first message, and read the first service data according to the read address of the first service data.
In combination with the second aspect, in some possible embodiments,
further comprising: a writing unit, configured to write the first service data into a cache of the device through a Map cache mechanism after the obtaining unit acquires the first service data and before the third area is dynamically applied for in the first area to store the first service data.
In combination with the second aspect, in some possible embodiments, the apparatus further includes:
an encapsulating unit, configured to encapsulate the address of the first service data into the first message containing the address of the first service data in the message queue before the first message containing the address of the first service data is stored in the second area.
In combination with the second aspect, in some possible embodiments, the apparatus further includes: a releasing unit, wherein
the message contains a number associated with the message; different messages are associated with different numbers.
The first message comprises a first number associated with the first message;
after the first message in the second area is read to obtain the address of the first service data in the first message, and the first service data is read through the read address of the first service data,
the writing unit is further configured to write the first processing result into the second area after the read first service data is processed and a first processing result containing the first number is obtained; the first processing result is used for notifying the first processor to release the first service data stored in the cache of the device and the first message containing the address of the first service data;
the releasing unit is configured to release the first service data stored in the cache of the device and the first message containing the address of the first service data after the first processing result containing the first number is read from the second area.
In a third aspect, the present application provides a data sharing device, including an input device, an output device, a processor, and a memory, where the processor, the input device, the output device, and the memory are connected to each other, where the memory is used to store an application program code that supports a device to execute the data sharing method, and the processor is configured to execute the data sharing method provided in the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium for storing one or more computer programs, the one or more computer programs comprising instructions for performing the data sharing method provided in the first aspect when the computer program runs on a computer.
In a fifth aspect, the present application provides a computer program comprising data sharing instructions which, when run on a computer, perform the data sharing method provided by the first aspect.
The application provides a data sharing method, a data sharing device and data sharing equipment. First, the device divides a shared memory into a first area and a second area through a first processor; the first area is used for storing the service data acquired by the first processor; the second area is used for storing messages containing addresses of the service data. Then, the device dynamically applies for a third area in the first area through the first processor to store the first service data, and stores a first message containing the address of the first service data in the second area; the first service data is data in the service data; the first message is a message in the second area. Finally, the device reads the first message in the second area through the second processor to read the address of the first service data in the first message, and reads the first service data through the read address of the first service data. By adopting the method and the device, the second processor reads the address of the first service data from the second area and then reads the first service data from the first area through that address, so that the service data is shared between the second processor and the first processor (in contrast to the prior art, where data is mainly transferred between processors by copying). This reduces the copying of a large amount of service data between the processors and avoids the transmission of a large amount of data between the processors at the same time point (a large amount of simultaneous data transmission between processors would occupy a large amount of system bandwidth and reduce the system throughput), thereby improving the data processing efficiency of the system and the system throughput.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic diagram of an architecture of a data sharing system provided in the present application;
FIG. 2 is a schematic diagram of a shared region partition provided herein;
FIG. 3 is a schematic flow chart of data sharing provided herein;
FIG. 4 is a schematic block diagram of a data sharing apparatus provided in the present application;
FIG. 5 is a schematic block diagram of a data sharing device provided in the present application.
Detailed Description
The technical solutions in the present application will be described clearly and completely with reference to the accompanying drawings in the present application, and it is obvious that the described embodiments are some, not all embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
In particular implementations, the devices described herein include, but are not limited to, portable devices such as mobile phones, laptop computers, or tablet computers having touch-sensitive surfaces (e.g., touch screen displays and/or touch pads). It should also be understood that in some embodiments the device is not a portable communication device, but a desktop computer having a touch-sensitive surface (e.g., a touch screen display and/or a touchpad).
In the discussion that follows, a device that includes a display and a touch-sensitive surface is described. However, it should be understood that the device may include one or more other physical user interface devices such as a physical keyboard, mouse, and/or joystick.
The device supports various applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disc burning application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an email application, an instant messaging application, an exercise support application, a photo management application, a digital camera application, a web browsing application, a digital music player application, and/or a digital video player application.
Various applications that may be executed on the device may use at least one common physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the device may be adjusted and/or changed between applications and/or within respective applications. In this way, a common physical architecture (e.g., touch-sensitive surface) of the device can support various applications with user interfaces that are intuitive and transparent to the user.
For a better understanding of the present application, an architecture diagram of a data sharing system to which the present application is applicable is described below. Referring to fig. 1, fig. 1 is an architecture diagram of a data sharing system according to the present application.
As shown in fig. 1, a system may include, but is not limited to: the first processor, the second processor and the shared memory, wherein the shared memory may include but is not limited to: a first region and a second region.
It should be noted that, the shared memory is divided into a first area and a second area by the first processor; the first area is used for storing the service data acquired by the first processor; the second area is used for storing messages containing addresses of the service data.
It should be noted that devices may include, but are not limited to: a first processor and a second processor.
Taking a visual processing system as an example, the first processor may be an application processor and the second processor may be a vision processor. That is, the vision processor and the corresponding accelerator may share the video frames decoded by the application processor and the decoder.
Sharing data by the first processor and the second processor may include, but is not limited to, the steps of:
(1) The device dynamically applies for a third area in the first area through the first processor to store the first service data.
In this embodiment of the application, the third area is a partial memory area in the first area and is used to store the first service data, where the first service data may include, but is not limited to: video frames, face pictures and the like.
(2) The device encapsulates, through the first processor, the address of the first service data into a first message containing the address of the first service data in the message queue.
In the embodiment of the application, a message contains a number associated with the message; different messages are associated with different numbers.
The first message contains a first number associated with the first message.
(3) The device writes the first message into a cache of the device through the first processor by using a Map cache mechanism.
(4) The device reads the first message in the second area through the second processor to read the address of the first service data in the first message, and reads the first service data through the read address of the first service data.
(5) The device processes the read first service data through the second processor to obtain a first processing result containing the first number, and writes the first processing result into the second area; the first processing result is used to notify the first processor to release the first service data stored in the cache of the device and the first message containing the address of the first service data.
(6) The device reads the first processing result containing the first number.
In the embodiment of the application, the first processing result is written into the second area; the first processing result is used to notify the first processor to release the first service data stored in the cache of the device and the first message containing the address of the first service data.
(7) If the first processing result containing the first number is read from the second area through the first processor, the device releases, through the first processor, the first service data stored in the cache of the device and the first message containing the address of the first service data.
It should be noted that, if the information cached in the cache of the device reaches the cache capacity of the cache, the first processor first releases the information that was stored in the cache earliest, where the information includes: the first service data or the first message containing the address of the first service data.
Alternatively,
if the information cached in the cache of the device reaches the cache capacity of the cache, the first processor releases the information with the lowest priority in the cache, where the information includes: the first service data or the first message containing the address of the first service data.
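For illustration only, the following C sketch shows one possible layout of the seq-numbered message and of the processing result exchanged in steps (2), (5) and (7). The type and field names (share_msg_t, share_result_t, seq, data_off, data_len, status) are assumptions introduced here for readability and are not defined by the patent; the address of the service data is modelled as an offset into the first area.

```c
#include <stdint.h>

/* A message stored in the second area: it carries only the number (Seq) and
 * the address of the service data, not the data itself. */
typedef struct {
    uint32_t seq;        /* number associated with the message                 */
    uint32_t data_off;   /* address of the service data, as an offset into
                            the first area                                     */
    uint32_t data_len;   /* length of the service data in bytes                */
} share_msg_t;

/* A processing result written back into the second area by the second
 * processor; it carries the same number as the message it answers. */
typedef struct {
    uint32_t seq;        /* first number, copied from the first message        */
    int32_t  status;     /* outcome of the processing                          */
} share_result_t;

/* Step (7): the first processor releases the cached service data and the
 * cached message only when the result carries the matching number. */
int should_release(const share_msg_t *m, const share_result_t *r)
{
    return m->seq == r->seq;
}
```

Because the message carries only a number and an address, the second area can remain small and fast while the bulk service data stays in the first area.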
It should be noted that, in order to describe the division of the shared memory in the embodiment of the present application in more detail, the following describes the division of the shared memory in detail with reference to fig. 2.
Fig. 2 illustrates a schematic diagram of a division of a shared area.
As shown in fig. 2, the first processor may divide the shared area into a first area and a second area. The first area is usually disposed in a double data rate Synchronous Dynamic Random Access Memory (DDR SDRAM) with a large capacity, and the second area is usually disposed in a Static Random-Access Memory (SRAM) with a high speed and a small capacity.
Note that data1 is service data stored in the first area, and the address of data1 is stored in the second area as part of a message. Seq denotes the number of the message containing the address of the service data.
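As a minimal sketch of the placement shown in fig. 2, the two areas might be described as follows; the base addresses, sizes and names (DATA_AREA_BASE, MSG_AREA_BASE, mem_area_t, divide_shared_memory) are hypothetical placeholders, not values taken from the patent.

```c
#include <stdint.h>
#include <stddef.h>

#define DATA_AREA_BASE 0x80000000u           /* first area: DDR SDRAM (hypothetical)     */
#define DATA_AREA_SIZE (64u * 1024u * 1024u)
#define MSG_AREA_BASE  0x20000000u           /* second area: on-chip SRAM (hypothetical) */
#define MSG_AREA_SIZE  (16u * 1024u)

typedef struct {
    uintptr_t base;   /* start address of the area in the shared memory       */
    size_t    size;   /* capacity of the area                                  */
    size_t    used;   /* bytes already handed out when the area is sub-allocated */
} mem_area_t;

/* S301: the first processor divides the shared memory into the two areas. */
void divide_shared_memory(mem_area_t *first, mem_area_t *second)
{
    first->base  = DATA_AREA_BASE;
    first->size  = DATA_AREA_SIZE;
    first->used  = 0;
    second->base = MSG_AREA_BASE;
    second->size = MSG_AREA_SIZE;
    second->used = 0;
}
```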
In the embodiment of the application, firstly, the device divides the shared memory into a first area and a second area through a first processor; the first area is used for storing the service data acquired by the first processor; the second area is used for storing messages containing addresses of the service data. Then, the device dynamically applies for a third area in the first area through the first processor to store the first service data, and stores a first message containing the address of the first service data in the second area; the first service data is data in the service data; the first message is a message in the second area. Finally, the device reads the first message in the second area through the second processor to read the address of the first service data in the first message, and reads the first service data through the read address of the first service data. By adopting the embodiment of the application, the second processor reads the address of the first service data from the second area and then reads the first service data from the first area through that address, so that the service data is shared between the second processor and the first processor (in contrast to the prior art, where data is mainly transferred between processors by copying). This reduces the copying of a large amount of service data between the processors and avoids the transmission of a large amount of data between the processors at the same time point (a large amount of simultaneous data transmission between processors would occupy a large amount of system bandwidth and reduce the system throughput), thereby improving the data processing efficiency of the system and the system throughput.
Fig. 2 is only used to explain the embodiment of the present application, and should not limit the present application.
Referring to fig. 3, a schematic flow chart of a data sharing method provided by the present application is shown in fig. 3, where the method may include at least the following steps:
s301, the device divides the shared memory into a first area and a second area through the first processor.
In the embodiment of the application, the first area is used for storing the service data acquired by the first processor; the second area is used for storing messages containing addresses of the service data.
It should be noted that the service data may include, but is not limited to: video frames, face pictures and the like.
It should be noted that the shared memory may include other memory areas besides the first area and the second area.
It should be noted that devices may include, but are not limited to: a first processor and a second processor.
Taking a visual processing system as an example, the first processor may be an application processor and the second processor may be a vision processor. That is, the vision processor and the corresponding accelerator may share the video frames decoded by the application processor and the decoder.
S302, the device dynamically applies for a third area in the first area through the first processor to store the first service data, and stores a first message containing an address of the first service data in the second area.
In the embodiment of the application, the first service data is data in the service data; the first message is a message in the second region.
Before the device dynamically applies for the third area in the first area through the first processor to store the first service data, the method further includes the following steps:
step 1: the device obtains the first service data through the first processor.
Step 2: the device writes the acquired first service data into a cache of the device through a Map cache mechanism by using the first processor.
Before storing the first message containing the address of the first service data in the second area, the method further comprises the following steps:
step 1: the device encapsulates, by the first processor, the address of the first traffic data into a first message containing the address of the first traffic data.
Specifically, the device encapsulates, by the first processor, the address of the first service data into a first message containing the address of the first service data in the message queue.
Step 2: the device writes the first message into a cache of the device by the first processor using a Map cache mechanism.
S303, the device reads the first message in the second area through the second processor to read the address of the first service data in the first message, and reads the first service data through the read address of the first service data.
It should be noted that the message may include, but is not limited to: the address of the service data and a number associated with the message.
Different messages are associated with different numbers, that is, the number associated with a message is a unique identifier of that message; the number may be a string of digits, characters or letters.
It should be noted that the first message contains a first number associated with the first message.
After the device reads the first message in the second area through the second processor to read the address of the first service data in the first message, and reads the first service data through the read address of the first service data, the method further includes the following working steps:
working step 1: and the equipment processes the read first service data through a second processor to obtain a first processing result containing a first number.
And 2, working step: writing the first processing result into the second area; the first processing result may be used to notify the first processor to release the first service data stored in the cache of the device and the first message including the address of the first service data.
And 3, working step: and if the device reads a first processing result containing the first number from the first area through the first processor, releasing the first service data stored in the cache of the device and a first message containing the address of the first service data through the first processor.
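A consumer-and-release sketch of these working steps is given below, under the same hypothetical layout as the earlier sketches: the second processor follows the address carried by the message to the data, writes back a result carrying the same number, and the first processor releases the matching cached entry. The summation loop is only a placeholder for the real processing, which the patent does not specify.

```c
#include <stdint.h>

typedef struct { uint32_t seq, data_off, data_len; } share_msg_t;
typedef struct { uint32_t seq; int32_t status; } share_result_t;

/* Working steps 1 and 2, on the second processor: read the data through the
 * address carried by the message, process it, and build a result that carries
 * the same first number. */
share_result_t process_message(const uint8_t *data_area, const share_msg_t *m)
{
    const uint8_t *svc = data_area + m->data_off;
    int32_t status = 0;
    for (uint32_t i = 0; i < m->data_len; ++i)
        status += svc[i];                    /* placeholder "processing"            */
    share_result_t r = { m->seq, status };   /* caller writes r into the second area */
    return r;
}

/* Working step 3, on the first processor: once a result with a matching number
 * has been read from the second area, release the cached entry for it. */
void reclaim_on_result(share_msg_t *cache, uint32_t cache_slots,
                       const share_result_t *r)
{
    share_msg_t *slot = &cache[r->seq % cache_slots];
    if (slot->seq == r->seq)
        slot->data_len = 0;                  /* mark the cached data/message freed  */
}
```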
It should be noted that, if the information cached in the cache of the device reaches the cache capacity of the cache, the first processor first releases the information that was stored in the cache earliest, where the information includes: the first service data or the first message containing the address of the first service data. Alternatively,
if the information cached in the cache of the device reaches the cache capacity of the cache, the first processor releases the information with the lowest priority in the cache, where the information includes: the first service data or the first message containing the address of the first service data.
It should be noted that when the messages containing addresses of the service data in the second area reach the storage capacity of the second area, the device waits for the second processor to read messages from the second area before storing new ones.
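The two release policies and the backpressure rule above could be sketched as follows, assuming the first processor tracks its cached entries in a small array; the victim-selection functions and the head/tail occupancy check are illustrative assumptions rather than mechanisms described in the patent.

```c
#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint32_t seq;        /* number of the cached message                    */
    uint32_t priority;   /* only used by the lowest-priority policy         */
    uint32_t in_use;     /* non-zero while the entry is still cached        */
} cache_entry_t;

/* Policy 1: when the cache is full, release the entry stored earliest
 * (smallest number, assuming numbers are assigned in increasing order). */
size_t pick_oldest(const cache_entry_t *c, size_t n)
{
    size_t victim = 0;
    for (size_t i = 1; i < n; ++i)
        if (c[i].seq < c[victim].seq)
            victim = i;
    return victim;
}

/* Policy 2: when the cache is full, release the lowest-priority entry. */
size_t pick_lowest_priority(const cache_entry_t *c, size_t n)
{
    size_t victim = 0;
    for (size_t i = 1; i < n; ++i)
        if (c[i].priority < c[victim].priority)
            victim = i;
    return victim;
}

/* Backpressure: when the second area already holds as many messages as it can
 * store, the producer waits until the second processor has read some of them. */
int msg_area_has_room(uint32_t head, uint32_t tail, uint32_t slots)
{
    return (head - tail) < slots;
}
```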
To sum up, in the embodiment of the present application, first, the device divides the shared memory into a first area and a second area through the first processor; the first area is used for storing the service data acquired by the first processor; the second area is used for storing messages containing addresses of the service data. Then, the device dynamically applies for a third area in the first area through the first processor to store the first service data, and stores a first message containing the address of the first service data in the second area; the first service data is data in the service data; the first message is a message in the second area. Finally, the device reads the first message in the second area through the second processor to read the address of the first service data in the first message, and reads the first service data through the read address of the first service data. By adopting the embodiment of the application, the second processor reads the address of the first service data from the second area and then reads the first service data from the first area through that address, so that the service data is shared between the second processor and the first processor (in contrast to the prior art, where data is mainly transferred between processors by copying). This reduces the copying of a large amount of service data between the processors and avoids the transmission of a large amount of data between the processors at the same time point (a large amount of simultaneous data transmission between processors would occupy a large amount of system bandwidth and reduce the system throughput), thereby improving the data processing efficiency of the system and the system throughput.
It is to be understood that the related definitions and descriptions not provided in the embodiment of the method of fig. 3 refer to the embodiments of fig. 1 and fig. 2, and are not repeated herein.
Referring to fig. 4, a data sharing apparatus provided in the present application is shown. As shown in fig. 4, the apparatus 40 includes: an acquisition unit 401, a division unit 402, an application unit 403, a storage unit 404, and a reading unit 405.
Wherein:
an obtaining unit 401 is configured to obtain service data.
A dividing unit 402, configured to divide the shared memory into a first area and a second area; the first area is used for storing the service data acquired by the acquisition unit; the second area is used for storing messages containing addresses of the service data.
An applying unit 403, configured to dynamically apply for a third area in the first area.
A storage unit 404, configured to store the first service data and the first message including the address of the first service data in a third area to which the application unit 403 dynamically applies; the first service data is data in the service data; the first message is a message in the second region.
The reading unit 405 is configured to read the first message in the second area to read an address of the first service data in the first message, and read the first service data according to the read address of the first service data.
In addition to the obtaining unit 401, the dividing unit 402, the applying unit 403, the storage unit 404 and the reading unit 405, the apparatus 40 further includes: a writing unit.
A writing unit, configured to write the first service data into a cache of the device through a Map cache mechanism after the obtaining unit 401 obtains the first service data before dynamically applying for the third area in the first area to store the first service data.
In addition to the obtaining unit 401, the dividing unit 402, the applying unit 403, the storage unit 404, the reading unit 405 and the writing unit, the apparatus 40 further includes: an encapsulating unit.
And the encapsulating unit is used for encapsulating the address of the first service data into the first message containing the address of the first service data in the message queue before the first message containing the address of the first service data is stored in the second area.
And the writing unit is also used for writing the first message into the cache of the equipment by utilizing a Map cache mechanism.
In addition to the obtaining unit 401, the dividing unit 402, the applying unit 403, the storage unit 404, the reading unit 405 and the writing unit, the apparatus 40 further includes: a releasing unit.
It should be noted that the message contains the number associated with the message; different messages are associated with different numbers.
The first message comprises a first number associated with the first message;
after the first message in the second area is read to obtain the address of the first service data in the first message, and the first service data is read through the read address of the first service data,
the writing unit is further configured to write the first processing result into the second area after the read first service data is processed and a first processing result containing the first number is obtained; the first processing result is used for notifying the first processor to release the first service data stored in the cache of the device and the first message containing the address of the first service data;
the releasing unit is configured to release the first service data stored in the cache of the device and the first message containing the address of the first service data after the first processing result containing the first number is read from the second area.
To sum up, in the embodiment of the present application, the apparatus 40 divides the shared memory into the first area and the second area through the dividing unit 402; the first area is used for storing the service data acquired by the obtaining unit 401; the second area is used for storing messages containing addresses of the service data. Then, the apparatus 40 dynamically applies for a third area in the first area through the applying unit 403 to store the first service data, and stores, through the storage unit 404, a first message containing the address of the first service data in the second area; the first service data is data in the service data; the first message is a message in the second area. Finally, the apparatus 40 reads the first message in the second area through the reading unit 405 to read the address of the first service data in the first message, and reads the first service data through the read address of the first service data. With this embodiment, the apparatus 40 reads, through the reading unit 405, the address of the first service data from the second area and then reads the first service data from the first area through that address, so that the service data is shared between the different processing units that process the service data in the apparatus 40, the copying of a large amount of service data between the processing units is reduced, the transmission of a large amount of data between processors at the same time point is avoided, and the data processing efficiency and the throughput rate of the system are improved.
It should be understood that the apparatus 40 is merely one example provided by the embodiments of the present application and that the apparatus 40 may have more or less components than those shown, may combine two or more components, or may have a different configuration of components to implement.
It can be understood that, regarding the specific implementation manner of the functional blocks included in the apparatus 40 of fig. 4, reference may be made to the embodiments described in fig. 1, fig. 2, or fig. 3, which are not described herein again.
Fig. 5 is a schematic structural diagram of a data sharing device provided in the present application. In this embodiment of the application, the device may include various devices such as a mobile phone, a tablet computer, a personal digital assistant (PDA), a mobile internet device (MID), and an intelligent wearable device (e.g., a smart watch or a smart bracelet), which is not limited in this embodiment of the application. As shown in fig. 5, the device 50 may include: a baseband chip 501, a memory 502 (one or more computer-readable storage media), and a peripheral system 503. These components may communicate over one or more communication buses 504.
The baseband chip 501 may include, but is not limited to: a processor 505 and a processor 506.
The device 50 divides the shared memory into a first area and a second area through the processor 505; the first area is used for storing the service data acquired by the processor 505; the second area is used for storing messages containing addresses of the service data.
The device 50 dynamically applies for a third area in the first area through the processor 505 to store the first service data, and stores a first message containing an address of the first service data in the second area; the first service data is data in the service data; the first message is a message in the second region.
The device 50 reads the first message in the second area through the processor 506 to read the address of the first service data in the first message, and reads the first service data through the read address of the first service data.
The memory 502 is coupled to the processor 505 and the processor 506 and may be used to store various software programs and/or sets of instructions. In particular implementations, the memory 502 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The memory 502 may store an operating system (hereinafter referred to simply as a system), such as an embedded operating system like ANDROID, IOS, WINDOWS, or LINUX. The memory 502 may also store a network communication program that may be used to communicate with one or more additional devices or one or more network devices. The memory 502 may further store a user interface program, which may vividly display the content of an application program through a graphical operation interface, and receive control operations of the application program from the user through input controls such as menus, dialog boxes, and buttons.
It is to be appreciated that the memory 502 can be utilized to store implementation code that implements the data sharing method.
The memory 502 may also store one or more application programs. As shown in fig. 5, these applications may include: social applications (e.g., Facebook), image management applications (e.g., photo album), map applications (e.g., Google Maps), browsers (e.g., Safari, Google Chrome), and so forth.
The peripheral system 503 is mainly used to implement interaction between the user of the device 50 and the external environment, and mainly includes the input and output devices of the device 50. In a specific implementation, the peripheral system 503 may include: a touch screen controller 507, a camera controller 508, and an audio controller 509. Each controller may be coupled to a corresponding peripheral device (e.g., touch screen 510, camera 511, and audio circuitry 52). In some embodiments, the display screen may be configured with a self-capacitive floating touch panel, or may be a touch screen configured with an infrared floating touch panel. In some embodiments, the camera 511 may be a 3D camera. It should be noted that the peripheral system 503 may also include other I/O peripherals.
To sum up, in the embodiment of the present application, first, the device 50 divides the shared memory into a first region and a second region through the processor 505; the first area is used for storing the service data acquired by the processor 505; the second area is used for storing messages containing addresses of the service data. Then, the device 50 dynamically applies for a third area in the first area through the processor 505 to store the first service data, and stores a first message containing an address of the first service data in the second area; the first service data is data in the service data; the first message is a message in the second region. Finally, the device 50 reads the first message in the second area through the processor 506 to read the address of the first service data in the first message, and reads the first service data through the read address of the first service data. By adopting the embodiment of the application, the processor 506 reads the first service data from the first area through reading the address of the first service data from the second area, so that the sharing of the service data between the processor 506 and the processor 505 is realized, the copying of a large amount of service data between the processors is reduced, the transmission of a large amount of data between the processors at the same time point is avoided, and the data processing efficiency of the data sharing system and the throughput rate of the system are improved.
A computer-readable storage medium stores a computer program which, when executed by a processor, implements the data sharing method described above.
The computer readable storage medium may be an internal storage unit of the device according to any of the foregoing embodiments, for example, a hard disk or a memory of the device. The computer readable storage medium may also be an external storage device of the device, such as a plug-in hard disk provided on the device, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. Further, the computer readable storage medium may also include both an internal storage unit and an external storage device of the device. The computer-readable storage medium is used for storing a computer program and other programs and data required by the apparatus. The computer readable storage medium may also be used to temporarily store data that has been output or is to be output.
The present application also provides a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as set out in the above method embodiments. The computer program product may be a software installation package, the computer comprising electronic equipment.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of both, and that the components and steps of the examples have been described above generally in terms of their functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways.
The above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electrical, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiments of the present application.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially or partially contributed by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
While the invention has been described with reference to specific embodiments, the scope of the invention is not limited thereto, and those skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (9)

1. A method for sharing data, comprising:
the device divides the shared memory into a first area and a second area through a first processor; the first area is used for storing the service data acquired by the first processor; the second area is used for storing a message containing the address of the service data; the message contains a number associated with the message; different messages are associated with different numbers; wherein the first region is disposed in a double rate synchronous dynamic random access memory; the second area is arranged in a static random access memory;
the device dynamically applies for a third area in the first area through the first processor to store first service data, and stores a first message containing an address of the first service data in the second area; the first service data is data in the service data; the first message is a message in the second region; the first message comprises a first number associated with the first message;
the device reads the first message in the second area through a second processor to read the address of the first service data in the first message, and reads the first service data through the read address of the first service data;
the device processes the read first service data through a second processor to obtain a first processing result containing the first number, and writes the first processing result into the second area; the first processing result is used for informing the first processor to release the first service data stored in the cache of the equipment and a first message containing the address of the first service data;
and if the device reads a first processing result containing the first number from the second area through the first processor, releasing the first service data and a first message containing the address of the first service data, which are cached and stored by the device, through the first processor.
2. The method of claim 1, wherein before the device dynamically applies for a third area in the first area for storing first service data through the first processor, further comprising:
the equipment acquires the first service data through the first processor, and writes the first service data into a cache of the equipment through a Map cache mechanism.
3. The method of claim 1, wherein prior to storing the first message including the address of the first service data in the second region, further comprising:
and the equipment encapsulates the address of the first service data into a first message containing the address of the first service data through the first processor, and writes the first message into a cache of the equipment through the first processor by utilizing a Map cache mechanism.
4. The method of claim 3,
the device encapsulates, by the first processor, the address of the first service data into a first message containing the address of the first service data, including:
the device encapsulates, by the first processor, the address of the first service data into a first message in a message queue that contains the address of the first service data.
5. The method of claim 2 or 3,
if the information cached in the cache of the device reaches the cache capacity of the cache, the first processor releases the information stored in the cache first, and the information includes: the first service data or a first message containing an address of the first service data.
6. The method of claim 2 or 3,
if the information cached in the cache of the device reaches the cache capacity of the cache, the first processor releases the information with the lowest priority in the cache, and the information comprises: the first service data or a first message containing an address of the first service data.
7. A data sharing apparatus, comprising:
an obtaining unit, configured to obtain service data;
the device comprises a dividing unit, a first memory unit and a second memory unit, wherein the dividing unit is used for dividing the shared memory into a first area and a second area; the first area is used for storing the service data acquired by the acquisition unit; the second area is used for storing a message containing the address of the service data; the message contains a number associated with the message; different messages are associated with different numbers; wherein the first region is disposed in a double rate synchronous dynamic random access memory; the second area is arranged in a static random access memory;
the application unit is used for dynamically applying for a third area in the first area;
a storage unit, configured to store first service data and a first message including an address of the first service data in the third area dynamically applied by the application unit; the first service data is data in the service data; the first message is a message in the second region; the first message comprises a first number associated with the first message;
a reading unit, configured to read the first message in the second area, so as to read an address of the first service data in the first message, and read the first service data according to the read address of the first service data;
the writing unit is further configured to write the first processing result into the second area after the read first service data is processed and a first processing result including the first number is obtained; the first processing result is used for informing the first processor to release the first service data stored in the cache of the data sharing device and a first message containing the address of the first service data;
and the releasing unit is used for releasing the first service data stored in the cache and the first message containing the address of the first service data after the first processing result containing the first number is read from the second area.
8. A data sharing device, comprising: an input device, an output device, a memory, and a processor coupled to the memory, wherein the input device, the output device, the processor, and the memory are interconnected, the memory is configured to store application program code, and the processor is configured to invoke the program code to perform the data sharing method according to any one of claims 1 to 6.
9. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the data sharing method according to any one of claims 1 to 6.
CN201811653675.0A 2018-12-29 2018-12-29 Data sharing method, device, equipment and system Active CN109857573B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811653675.0A CN109857573B (en) 2018-12-29 2018-12-29 Data sharing method, device, equipment and system
PCT/CN2019/121554 WO2020134833A1 (en) 2018-12-29 2019-11-28 Data sharing method, device, equipment and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811653675.0A CN109857573B (en) 2018-12-29 2018-12-29 Data sharing method, device, equipment and system

Publications (2)

Publication Number Publication Date
CN109857573A CN109857573A (en) 2019-06-07
CN109857573B true CN109857573B (en) 2021-03-05

Family

ID=66893771

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811653675.0A Active CN109857573B (en) 2018-12-29 2018-12-29 Data sharing method, device, equipment and system

Country Status (2)

Country Link
CN (1) CN109857573B (en)
WO (1) WO2020134833A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109857573B (en) * 2018-12-29 2021-03-05 深圳云天励飞技术有限公司 Data sharing method, device, equipment and system
CN112765085A (en) * 2020-12-29 2021-05-07 紫光展锐(重庆)科技有限公司 Data transmission method and related device
CN114035743B (en) * 2021-10-14 2024-05-14 长沙韶光半导体有限公司 Robot sensing data storage method and related equipment
CN114115732A (en) * 2021-11-10 2022-03-01 深圳Tcl新技术有限公司 Data processing method, device and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001357022A (en) * 2000-06-15 2001-12-26 Nec Corp Device and method for data communications between plural processors
US6397305B1 (en) * 1997-11-13 2002-05-28 Virata Ltd. Method and apparatus for controlling shared memory access
CN1904873A (en) * 2005-07-28 2007-01-31 大唐移动通信设备有限公司 Inter core communication method and apparatus for multi-core processor in embedded real-time operating system
CN101504617A (en) * 2009-03-23 2009-08-12 华为技术有限公司 Data transmitting and receiving method and device based on processor sharing internal memory
CN101853238A (en) * 2010-06-01 2010-10-06 华为技术有限公司 Message communication method and system between communication processors
CN107577539A (en) * 2016-07-05 2018-01-12 阿里巴巴集团控股有限公司 The shared drive structure communicated for kernel state and User space and its application

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4715219B2 (en) * 2005-02-10 2011-07-06 ソニー株式会社 Shared memory device
KR101196566B1 (en) * 2005-05-24 2012-11-01 가부시키가이샤 터보 데이터 라보라토리 Multiprocessor system, and its information processing method
US20070121499A1 (en) * 2005-11-28 2007-05-31 Subhasis Pal Method of and system for physically distributed, logically shared, and data slice-synchronized shared memory switching
CN100377118C (en) * 2006-03-16 2008-03-26 浙江大学 Built-in file system realization based on SRAM
CN101551761A (en) * 2009-04-30 2009-10-07 浪潮电子信息产业股份有限公司 Method for sharing stream memory of heterogeneous multi-processor
CN101976217B (en) * 2010-10-29 2014-06-04 中兴通讯股份有限公司 Anomaly detection method and system for network processing unit
CN102541805A (en) * 2010-12-09 2012-07-04 沈阳高精数控技术有限公司 Multi-processor communication method based on shared memory and realizing device thereof
CN108366111B (en) * 2018-02-06 2020-04-07 西安电子科技大学 Data packet low-delay buffer device and method for switching equipment
CN109857573B (en) * 2018-12-29 2021-03-05 深圳云天励飞技术有限公司 Data sharing method, device, equipment and system

Also Published As

Publication number Publication date
CN109857573A (en) 2019-06-07
WO2020134833A1 (en) 2020-07-02

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant