CN113870093A - Image caching method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113870093A
CN113870093A
Authority
CN
China
Prior art keywords
images, training, training process, groups, processes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111145887.XA
Other languages
Chinese (zh)
Inventor
王志宏
Current Assignee
Shanghai Sensetime Technology Development Co Ltd
Original Assignee
Shanghai Sensetime Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Sensetime Technology Development Co Ltd filed Critical Shanghai Sensetime Technology Development Co Ltd
Priority to CN202111145887.XA
Publication of CN113870093A
Priority to PCT/CN2022/074698 (published as WO2023050673A1)
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 - General purpose image data processing
    • G06T 1/60 - Memory management
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/54 - Interprogram communication
    • G06F 9/544 - Buffers; Shared memory; Pipes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides an image caching method and apparatus, an electronic device, and a storage medium. The method includes: reading a group of images with each of a plurality of training processes to obtain a plurality of groups of images, wherein the plurality of training processes correspond one-to-one to the plurality of groups of images; applying for a shared memory corresponding to the plurality of groups of images with a first training process of the plurality of training processes, and sharing the applied-for shared memory with each training process of the plurality that differs from the first training process; and caching, with each of the plurality of training processes, the group of images it read into the shared memory, so that each of the plurality of training processes can read the plurality of groups of images from the shared memory while executing the neural network training step.

Description

Image caching method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer vision technologies, and in particular, to an image caching method and apparatus, an electronic device, and a storage medium.
Background
Training a neural network divides into two parts, data processing and training, where the data processing stage mainly comprises two steps: reading images from a hard disk and preprocessing them. Generally, to increase training speed, multiple processes are started on one physical machine so as to train with multiple graphics cards simultaneously.
At present, when a single image is large, reading it in the data processing stage takes a long time: while the training process trains the neural network on the previous image, the next image is still being read. Data processing and training therefore cannot run well in parallel, image reading takes too long, and the training efficiency of the neural network is low.
Disclosure of Invention
The embodiment of the disclosure is expected to provide an image caching method and device, an electronic device and a storage medium.
The technical scheme of the embodiment of the disclosure is realized as follows:
the embodiment of the disclosure provides an image caching method, which includes:
reading a group of images by utilizing each training process in a plurality of training processes to obtain a plurality of groups of images; wherein the plurality of training processes correspond to the plurality of sets of images one to one;
applying for a shared memory corresponding to the plurality of groups of images by using a first training process in the plurality of training processes, and sharing the applied shared memory to each training process different from the first training process in the plurality of training processes;
and caching a group of images read from the plurality of groups of images to the shared memory by utilizing each training process in the plurality of training processes, so that each training process in the plurality of training processes can read the plurality of groups of images from the shared memory during the period of executing the neural network training step.
In the above method, the reading a group of images by using each of a plurality of training processes to obtain a plurality of groups of images includes:
reading, with a second training process of the plurality of training processes, an image path list recording a storage path of each image in the image data set, and broadcasting the image path list to each training process of the plurality of training processes that differs from the second training process;
and reading a group of images from the image data set by utilizing each training process in the plurality of training processes based on the image path list and according to a corresponding preset image reading strategy to obtain the plurality of groups of images.
In the above method, before applying for the shared memory corresponding to the plurality of groups of images by using a first training process of the plurality of training processes, the method further includes:
calculating the memory size required for supporting caching of a group of images read from the multiple groups of images by utilizing each training process in the multiple training processes to obtain multiple memory sizes corresponding to the multiple groups of images one by one;
summarizing the sizes of the plurality of memories by utilizing the first training process to obtain the size of the whole memory supporting the storage of the plurality of groups of images;
the applying for the shared memory corresponding to the plurality of groups of images by using a first training process of the plurality of training processes includes:
and applying for the shared memory according to the size of the whole memory by utilizing the first training process.
In the above method, after calculating, with each of the plurality of training processes, the memory size required to support caching of a group of images read from the plurality of groups of images and obtaining a plurality of memory sizes corresponding one-to-one to the plurality of groups of images, the method further includes:
and summarizing the memory sizes by utilizing each training process in the training processes to obtain the whole memory size.
In the above method, the calculating, by using each of the plurality of training processes, a memory size required to support caching of a group of images read from the plurality of groups of images to obtain a plurality of memory sizes corresponding to the plurality of groups of images one to one includes:
acquiring shape information of a group of images read from the plurality of groups of images by utilizing each training process in the plurality of training processes;
and calculating, with each of the plurality of training processes, the memory size required to support caching of the read group of images according to the shape information of the group of images read from the plurality of groups of images, to obtain the plurality of memory sizes.
In the above method, after the obtaining, by each of the plurality of training processes, shape information of one of the plurality of sets of images read, the method further includes:
and summarizing the shape information of the multiple groups of images by utilizing each training process in the multiple training processes respectively to obtain an information summarizing result.
An embodiment of the present disclosure provides an image caching apparatus, including:
the reading module is used for reading a group of images by utilizing each training process in a plurality of training processes to obtain a plurality of groups of images; wherein the plurality of training processes correspond to the plurality of sets of images one to one;
the processing module is used for applying for a shared memory corresponding to the plurality of groups of images by using a first training process in the plurality of training processes and sharing the applied shared memory to each training process different from the first training process in the plurality of training processes;
and the cache module is used for caching a group of images read from the plurality of groups of images to the shared memory by utilizing each training process in the plurality of training processes so that each training process in the plurality of training processes can read the plurality of groups of images from the shared memory during the period of executing the neural network training step.
In the above apparatus, the reading module is specifically configured to read, by using a second training process of the plurality of training processes, an image path list that records a storage path of each image in the image data set, and broadcast the image path list to each training process of the plurality of training processes that is different from the second training process; and reading a group of images from the image data set by utilizing each training process in the plurality of training processes based on the image path list and according to a corresponding preset image reading strategy to obtain the plurality of groups of images.
In the above apparatus, the processing module is further configured to calculate, by using each of the plurality of training processes, a memory size required to support caching of a group of images read from the plurality of groups of images, to obtain a plurality of memory sizes corresponding to the plurality of groups of images one to one; summarizing the sizes of the plurality of memories by utilizing the first training process to obtain the size of the whole memory supporting the storage of the plurality of groups of images;
the processing module is specifically configured to apply for the shared memory according to the size of the entire memory by using the first training process.
In the above apparatus, the processing module is further configured to summarize the memory sizes by using each of the training processes, respectively, to obtain the overall memory size.
In the above apparatus, the processing module is specifically configured to acquire shape information of a set of images read from the plurality of sets of images by using each of the plurality of training processes; and calculating the memory size required by the group of images which support cache reading according to the shape information of the group of images read from the plurality of groups of images by utilizing each training process in the plurality of training processes to obtain the plurality of memory sizes.
In the above apparatus, the processing module is further configured to summarize shape information of the plurality of groups of images by using each of the plurality of training processes, respectively, to obtain an information summarization result.
An embodiment of the present disclosure provides an electronic device, including: a processor, a memory, and a communication bus; wherein:
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is configured to execute one or more programs stored in the memory to implement the image caching method.
The disclosed embodiments provide a computer-readable storage medium storing one or more programs, which may be executed by one or more processors to implement the above-described image caching method.
The embodiments of the present disclosure provide an image caching method and apparatus, an electronic device, and a storage medium, the method including: reading a group of images with each of a plurality of training processes to obtain a plurality of groups of images, wherein the plurality of training processes correspond one-to-one to the plurality of groups of images; applying for a shared memory corresponding to the plurality of groups of images with a first training process of the plurality of training processes, and sharing the applied-for shared memory with each training process that differs from the first training process; and caching, with each of the plurality of training processes, the group of images it read into the shared memory, so that each training process can read the plurality of groups of images from the shared memory while executing the neural network training step. With the technical solution provided by the embodiments of the present disclosure, the images needed in the neural network training stage are cached into shared memory in advance by the training processes, which speeds up image reading and thus improves the efficiency of neural network training.
Drawings
Fig. 1 is a schematic flowchart of an image caching method according to an embodiment of the present disclosure;
FIG. 2 is a diagram illustrating an exemplary training process buffer image provided by an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of an image caching apparatus according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all embodiments of the present disclosure. The following examples are intended to illustrate the present disclosure, but are not intended to limit the scope of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
It should be noted that the terms "first/second/third" in the embodiments of the present disclosure are only used to distinguish similar objects and do not imply a particular ordering of those objects; it should be understood that "first/second/third" may be interchanged in a specific order or sequence where permitted, so that the embodiments of the present disclosure described herein can be implemented in an order other than that illustrated or described herein.
The embodiment of the present disclosure provides an image caching method, an execution subject of which may be an image caching apparatus, for example, the image caching method may be executed by a terminal device or a server or other electronic devices, where the terminal device may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the image caching method may be implemented by way of a processor invoking computer readable instructions stored in a memory.
The embodiment of the disclosure provides an image caching method. Fig. 1 is a schematic flowchart of an image caching method according to an embodiment of the present disclosure. As shown in fig. 1, in the embodiment of the present disclosure, the image caching method mainly includes the following steps:
s101, reading a group of images by utilizing each training process in a plurality of training processes to obtain a plurality of groups of images; wherein, a plurality of training processes correspond to a plurality of groups of images one by one.
In an embodiment of the present disclosure, an image caching apparatus reads a plurality of sets of images using a plurality of training processes.
It should be noted that, in the embodiment of the present disclosure, the plurality of groups of images are actually stored on a hard disk, and the image caching apparatus may read one group of images with each of the plurality of training processes; that is, the plurality of training processes correspond one-to-one to the plurality of groups of images. The specific number of training processes and the images read by each training process may be set according to actual requirements and application scenarios, and the embodiments of the present disclosure are not limited thereto.
It should be noted that, in the embodiment of the present disclosure, for each training process of the plurality of training processes, a neural network training step may be performed for training the neural network.
It should be noted that, in the embodiment of the present disclosure, a group of images read by each training process may include one or more frames of images, and the number of images included in each group of images may be set according to actual requirements, and the embodiment of the present disclosure is not limited.
Specifically, in an embodiment of the present disclosure, the image caching apparatus reads a group of images with each of a plurality of training processes to obtain a plurality of groups of images as follows: reading, with a second training process of the plurality of training processes, an image path list recording a storage path of each image in the image data set, and broadcasting the image path list to each training process of the plurality that differs from the second training process; and reading, with each of the plurality of training processes, a group of images from the image data set based on the image path list and according to a corresponding preset image reading strategy, to obtain the plurality of groups of images.
It should be noted that, in the embodiment of the present disclosure, the multiple training processes include a first training process, where the first training process is used to implement application and sharing of the shared memory, and is specifically described in detail in the subsequent steps, and the second training process may be the same training process as the first training process, or of course, may also be any training process different from the first training process in the multiple training processes. The specific second training process may be set according to actual needs and application scenarios, and the embodiment of the present disclosure is not limited.
It can be understood that, in the embodiment of the present disclosure, a storage path of each image in the image data set is recorded in the image path list, and the image caching device reads the image path list by using the second training process and broadcasts the image path list to each training process different from the second training process in the plurality of training processes, so that each training process in the plurality of training processes learns the storage path of each image in the image data set, and thus, the image caching device can read a group of images based on the image path list by using each training process in the plurality of training processes.
It should be noted that, in the embodiment of the present disclosure, each of the plurality of training processes is assigned a corresponding preset image reading strategy, which indicates which group of images that training process needs to read from the image data set; that is, each training process actually reads a part of the images in the image data set. For example, suppose there are three training processes: training process 1, training process 2, and training process 3. The preset image reading strategy corresponding to training process 1 is to read the first third of the images recorded in the image path list, i.e., the group of images read by training process 1 consists of the first third of the images in the image data set as recorded in the image path list; the preset image reading strategy corresponding to training process 2 is to read the middle third of the images recorded in the image path list; and the preset image reading strategy corresponding to training process 3 is to read the last third of the images recorded in the image path list. Thus, with the three training processes, all images of the image data set can be read. The specific preset image reading strategy corresponding to each training process may be set according to actual requirements and application scenarios, and the embodiments of the present disclosure are not limited thereto.
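The thirds-based reading strategy described above can be sketched as a contiguous split of the image path list across processes. This is an illustrative sketch only; the function and variable names (`partition_paths`, `num_processes`, `rank`) are ours, not from the patent, and the remainder is assigned to the last process as one possible convention.

```python
# Hypothetical sketch of a preset image reading strategy: each training
# process reads a contiguous slice of the image path list, so that all
# processes together cover the whole data set.

def partition_paths(image_paths, num_processes, rank):
    """Return the slice of the path list that process `rank` should read."""
    per_proc = len(image_paths) // num_processes
    start = rank * per_proc
    # the last process also takes any remainder
    end = len(image_paths) if rank == num_processes - 1 else start + per_proc
    return image_paths[start:end]

paths = [f"img_{i}.jpg" for i in range(9)]
# three disjoint groups that together cover the full list
groups = [partition_paths(paths, 3, r) for r in range(3)]
```

With nine paths and three processes, process 1 gets the first three paths, process 2 the middle three, and process 3 the last three, matching the example in the text.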
It should be noted that, in the embodiment of the present disclosure, the plurality of sets of images may cover the entire image data set, or may cover only a part of specific images of the image data set, and the embodiment of the present disclosure is not limited.
S102, applying for a shared memory corresponding to a plurality of groups of images by using a first training process in a plurality of training processes, and sharing the applied shared memory to each training process different from the first training process in the plurality of training processes.
In the embodiment of the disclosure, the image caching device applies for the shared memory corresponding to the multiple groups of images by using a first training process in the multiple training processes when the multiple training processes are used for reading the multiple groups of images, and shares the applied shared memory to each training process different from the first training process in the multiple training processes.
It should be noted that, in the embodiment of the present disclosure, the first training process may be any one of a plurality of training processes, or may be a specific one of a plurality of preset training processes, and may be the same training process as the second training process or a different training process from the second training process, which is not limited in the embodiment of the present disclosure.
It should be noted that, in the embodiment of the present disclosure, after applying for the shared memory by using the first training process, the image caching apparatus may share the shared memory to another training process. Specifically, the image caching device may send a descriptor pointing to the shared memory to other training processes by using the first training process, and the other training processes may obtain the specific shared memory according to the descriptor.
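The patent does not name a concrete sharing mechanism; as one possible realization, Python's `multiprocessing.shared_memory` lets the first training process create a named block whose name plays the role of the descriptor that other processes use to obtain the same memory. The sketch below runs in a single process for brevity, with the "owner" standing in for the first training process.

```python
# Minimal sketch: the first training process creates a named shared-memory
# block; the name serves as the descriptor sent to the other processes,
# which attach to the same block by that name.
from multiprocessing import shared_memory

# first training process: apply for the shared memory
owner = shared_memory.SharedMemory(create=True, size=1024)
descriptor = owner.name  # "descriptor" shared with the other processes

# any other training process: attach via the descriptor
view = shared_memory.SharedMemory(name=descriptor)
view.buf[0] = 42  # writes are visible to every attached process

value_seen_by_owner = owner.buf[0]

view.close()
owner.close()
owner.unlink()
```

Because both handles map the same physical memory, a byte written through one is immediately visible through the other, which is what allows every training process to read all cached groups.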
Specifically, in the embodiment of the present disclosure, before applying for the shared memory corresponding to the multiple groups of images, the image caching apparatus may execute the following steps by using a first training process of the multiple training processes: calculating the memory size required for supporting caching of a group of images read from a plurality of groups of images by utilizing each training process in a plurality of training processes to obtain a plurality of memory sizes corresponding to the plurality of groups of images one by one; summarizing a plurality of memory sizes by utilizing a first training process to obtain the size of the whole memory supporting the storage of a plurality of groups of images; correspondingly, the image caching device applies for the shared memory corresponding to the plurality of groups of images by using a first training process in the plurality of training processes, and the method comprises the following steps: and applying for sharing the memory according to the size of the whole memory by utilizing a first training process.
It is understood that, in the embodiment of the present disclosure, each training process may calculate the size of the memory required for caching a set of images when reading the set of images, that is, the size of the image data amount of the set of images.
It should be noted that, in the embodiment of the present disclosure, communication interaction may be performed between multiple training processes, and each training process different from the first training process may notify the first training process of the calculated memory size of a group of images supporting cache reading, and the first training process may summarize multiple memory sizes corresponding to multiple groups of images, so as to obtain an overall memory size supporting storage of multiple groups of images, and then apply for a shared memory matching the overall memory size, that is, a shared memory corresponding to multiple groups of images. The size of the whole memory is actually the size of the image data volume of the plurality of groups of images.
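The summarization step can be sketched as a simple sum, with a prefix sum additionally yielding the offset at which each process's group will later be cached inside the shared block. The names and the concrete byte counts below are illustrative, not from the patent.

```python
# Illustrative sketch: each training process reports the bytes needed to
# cache its group; summing gives the overall memory size to apply for,
# and prefix sums give each process its write offset in the shared block.
per_process_bytes = [4_000_000, 6_000_000, 5_000_000]  # e.g. ranks 0..2

overall_bytes = sum(per_process_bytes)

offsets = []
running = 0
for size in per_process_bytes:
    offsets.append(running)
    running += size
```

The first training process would then apply for a shared memory of `overall_bytes`, and each process would cache its group starting at its entry in `offsets`.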
Specifically, in the embodiment of the present disclosure, the image caching apparatus calculates, by using each training process in the multiple training processes, a memory size required to support caching of a group of images read from the multiple groups of images, to obtain multiple memory sizes corresponding to the multiple groups of images one to one, including: acquiring shape information of a group of images read from a plurality of groups of images by utilizing each training process in a plurality of training processes; and calculating the memory size required by the group of images supporting cache reading according to the shape information of the group of images read from the plurality of groups of images by utilizing each training process in the plurality of training processes to obtain a plurality of memory sizes.
It should be noted that, in the embodiment of the present disclosure, the image caching apparatus may obtain, with each of the plurality of training processes, the shape information of the group of images it read. For a frame of image, the shape information may include the image's dimensions and data type, among other information, and each training process may calculate the memory size required to support caching its group of images according to the shape information of that group.
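The size calculation from shape information can be sketched as follows; the helper names and the dtype table are ours, added for illustration, and the shapes shown are arbitrary examples.

```python
# Sketch: the memory needed to cache one image follows from its shape
# (dimensions) and data type, which together form the "shape information".
from math import prod

DTYPE_BYTES = {"uint8": 1, "float32": 4}  # illustrative dtype sizes

def image_bytes(shape, dtype):
    """Bytes needed to cache one image of the given shape and dtype."""
    return prod(shape) * DTYPE_BYTES[dtype]

def group_bytes(shapes_and_dtypes):
    """Bytes needed to cache a whole group of images."""
    return sum(image_bytes(s, d) for s, d in shapes_and_dtypes)

# one 1080x1920 RGB uint8 image plus one 224x224 RGB float32 image
size = group_bytes([((1080, 1920, 3), "uint8"), ((224, 224, 3), "float32")])
```

Each training process would run this calculation over its own group, producing the per-process memory sizes that are then summarized.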
Specifically, in the embodiment of the present disclosure, the image caching apparatus, using each training process of the multiple training processes, calculates a memory size required for supporting caching of a group of images read from the multiple groups of images, and after obtaining multiple memory sizes corresponding to the multiple groups of images one to one, may further perform the following steps: and summarizing a plurality of memory sizes by utilizing each training process in the plurality of training processes to obtain the size of the whole memory.
It should be noted that, in the embodiment of the present disclosure, in a case that the image caching apparatus calculates the memory size required by a group of images supporting cache reading by using each training process, each training process may be further used to summarize a plurality of memory sizes, so that each training process may obtain the sizes of a plurality of groups of images.
Specifically, in the embodiment of the present disclosure, after the image caching apparatus acquires the shape information of the group of images read from the plurality of groups of images by using each of the plurality of training processes, the following steps may be further performed: and summarizing the shape information of the multiple groups of images by utilizing each training process in the multiple training processes to obtain an information summarizing result.
It should be noted that, in the embodiment of the present disclosure, in the case that each training process is used to acquire shape information of a set of read images, the image caching apparatus may further respectively use each training process to summarize shape information of multiple sets of images, so that each training process may acquire shape information of multiple sets of images, and in this way, each training process may select a specific image based on the acquired shape information and then read the specific image when subsequently reading images from the shared memory.
S103, caching a group of images read from the multiple groups of images into a shared memory by using each training process in the multiple training processes, so that each training process in the multiple training processes can read the multiple groups of images from the shared memory during the period of executing the neural network training step.
In the embodiment of the disclosure, after the shared memory is shared to other training processes in the multiple training processes by using the first training process, that is, the multiple training processes all acquire the shared memory, so that a group of images read from the multiple groups of images can be cached to the shared memory by using each training process in the multiple training processes.
It can be understood that, in the embodiment of the present disclosure, when the image caching apparatus uses each of the plurality of training processes to cache its group of images into the shared memory, all of the plurality of groups of images are in fact cached in the shared memory. Each training process thus obtains a shared memory in which the plurality of groups of images are cached, so that during subsequent execution of the neural network training step it can read images from the shared memory; in particular, it can read any image of the plurality of groups cached there for neural network training. This increases the image reading speed and, correspondingly, the neural network training speed.
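The read path described above can be sketched with Python's standard `multiprocessing.shared_memory` module (an assumption for illustration; the patent does not name an API). The `index` mapping each image to an (offset, length) pair is a hypothetical bookkeeping structure standing in for the summarized shape information; the patent does not specify how images are located inside the block.

```python
from multiprocessing import shared_memory

# Hypothetical index mapping each cached image to its (offset, length) in
# the shared block; in practice it would be built while caching the images
# and made known to every training process along with the block's name.
index = {"img_0": (0, 4), "img_1": (4, 8)}

def read_image(shm_name, image_id):
    """Attach to the shared block by name and copy out one image's bytes.
    Any training process that knows the name can read any cached image."""
    shm = shared_memory.SharedMemory(name=shm_name)
    try:
        offset, length = index[image_id]
        return bytes(shm.buf[offset:offset + length])  # copy before closing
    finally:
        shm.close()
```

Because every process attaches to the same named block, no image leaves the page cache twice: the hard-disk read happens once, before training begins.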
It should be noted that, in the embodiment of the present disclosure, the image caching method is preferably applied to neural network training scenarios in which a single frame of image data used for training is large but the total amount of image data is small, that is, application scenarios with a small number of images.
Fig. 2 is a schematic diagram of an exemplary image caching flow of the training processes according to an embodiment of the present disclosure. As shown in Fig. 2, training process 1 reads the image path list and then broadcasts this path list to training process 2 and training process 3. Each training process determines, according to the image path list, the group of images it needs to read, and reads the corresponding group of images from the hard disk. Each training process then calculates, from the images it has read, the memory size required to cache them, and the sizes are summarized, so that all of the training processes know the size of the entire image data set. Training process 1 applies for a corresponding shared memory according to the summarized overall memory size, and shares the shared memory with the other processes. Finally, each process caches the images it read earlier to the corresponding position in the shared memory, after which each process can read the images in the shared memory during training.
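The flow above can be condensed into a minimal single-process simulation using Python's standard `multiprocessing.shared_memory` (an assumption; the patent does not name an API). Each "rank" here stands for one training process and owns one group of (name, bytes) images in place of images read from the hard disk; one block is allocated from the summarized total, and each group is copied to its own offset.

```python
from multiprocessing import shared_memory

def cache_images(groups):
    """Single-process sketch of the Fig. 2 flow: 'groups' holds one list of
    (name, data) pairs per rank. Rank 0 applies for one shared block sized
    to the summarized total, and every rank copies its own group into the
    block at its contiguous offset. Returns the block and an index mapping
    each image name to its (offset, length)."""
    sizes = [sum(len(data) for _, data in g) for g in groups]
    total = sum(sizes)                                    # summarized size
    shm = shared_memory.SharedMemory(create=True, size=total)  # rank 0 applies
    index, offset = {}, 0
    for g in groups:                      # each rank caches its own group
        for name, data in g:
            shm.buf[offset:offset + len(data)] = data
            index[name] = (offset, len(data))
            offset += len(data)
    return shm, index
```

In the real multi-process setting the broadcast of the path list and the summarization of sizes would go over an inter-process channel, and the block would be shared by name; this sketch keeps only the memory layout.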
The embodiment of the present disclosure provides an image caching method, which includes: reading a group of images by using each training process of a plurality of training processes to obtain a plurality of groups of images, wherein the plurality of training processes correspond to the plurality of groups of images one to one; applying for a shared memory corresponding to the plurality of groups of images by using a first training process of the plurality of training processes, and sharing the applied shared memory with each training process of the plurality of training processes different from the first training process; and caching the group of images read from the plurality of groups of images into the shared memory by using each of the plurality of training processes, so that each of the plurality of training processes can read the plurality of groups of images from the shared memory while executing the neural network training step. According to the image caching method provided by the embodiment of the present disclosure, the images needed in the neural network training stage are cached in the shared memory in advance by the training processes, which increases the image reading speed and improves the neural network training efficiency.
The embodiment of the present disclosure provides an image caching apparatus. Fig. 3 is a schematic structural diagram of an image caching apparatus according to an embodiment of the present disclosure. As shown in Fig. 3, the image caching apparatus includes:
a reading module 301, configured to read a group of images by using each training process in multiple training processes to obtain multiple groups of images; wherein the plurality of training processes correspond to the plurality of sets of images one to one;
a processing module 302, configured to apply for a shared memory corresponding to the multiple groups of images by using a first training process in the multiple training processes, and share the applied shared memory to each training process different from the first training process in the multiple training processes;
a caching module 303, configured to cache, by using each training process of the multiple training processes, the group of images read from the multiple groups of images into the shared memory, so that each of the multiple training processes can read the multiple groups of images from the shared memory while performing the neural network training step.
In an embodiment of the present disclosure, the reading module 301 is specifically configured to, by using a second training process in the plurality of training processes, read an image path list that records a storage path of each image in the image data set, and broadcast the image path list to each training process different from the second training process in the plurality of training processes; and reading a group of images from the image data set by utilizing each training process in the plurality of training processes based on the image path list and according to a corresponding preset image reading strategy to obtain the plurality of groups of images.
In an embodiment of the present disclosure, the processing module 302 is further configured to calculate, by using each of the multiple training processes, the memory size required to support caching of the group of images read from the multiple groups of images, to obtain multiple memory sizes corresponding to the multiple groups of images one to one; and summarize the multiple memory sizes by using the first training process, to obtain the overall memory size that supports storage of the multiple groups of images;
the processing module 302 is specifically configured to apply for the shared memory according to the size of the entire memory by using the first training process.
In an embodiment of the present disclosure, the processing module 302 is further configured to use each of the plurality of training processes to summarize the plurality of memory sizes, so as to obtain the overall memory size.
In an embodiment of the present disclosure, the processing module 302 is specifically configured to acquire, by using each of the multiple training processes, shape information of the group of images read from the multiple groups of images; and calculate, by using each of the multiple training processes, according to the shape information of the group of images read from the multiple groups of images, the memory size required to support caching of the read group of images, to obtain the multiple memory sizes.
In an embodiment of the present disclosure, the processing module 302 is further configured to summarize shape information of the multiple groups of images by using each training process of the multiple training processes, respectively, to obtain an information summarizing result.
The embodiment of the present disclosure provides an electronic device. Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in Fig. 4, in an embodiment of the present disclosure, the electronic device includes: a processor 401, a memory 402, and a communication bus 403; wherein,
the communication bus 403 is configured to implement connection and communication between the processor 401 and the memory 402;
the processor 401 is configured to execute one or more programs stored in the memory 402 to implement the image caching method described above.
The disclosed embodiments provide a computer-readable storage medium storing one or more programs, which may be executed by one or more processors to implement the above-described image caching method. The computer-readable storage medium may be a volatile Memory (volatile Memory), such as a Random-Access Memory (RAM); or a non-volatile Memory (non-volatile Memory), such as a Read-Only Memory (ROM), a flash Memory (flash Memory), a Hard Disk Drive (HDD) or a Solid-State Drive (SSD); or may be a device that includes one or any combination of the above-mentioned memories, such as a mobile phone, a computer, a tablet device, or a personal digital assistant.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable signal processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable signal processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable signal processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable signal processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only for the preferred embodiment of the present disclosure, and is not intended to limit the scope of the present disclosure.

Claims (14)

1. An image caching method, comprising:
reading a group of images by utilizing each training process in a plurality of training processes to obtain a plurality of groups of images; wherein the plurality of training processes correspond to the plurality of groups of images one to one;
applying for a shared memory corresponding to the plurality of groups of images by using a first training process in the plurality of training processes, and sharing the applied shared memory to each training process different from the first training process in the plurality of training processes;
and caching a group of images read from the plurality of groups of images to the shared memory by utilizing each training process in the plurality of training processes, so that each training process in the plurality of training processes can read the plurality of groups of images from the shared memory during the period of executing the neural network training step.
2. The method of claim 1, wherein the reading a group of images by utilizing each training process in a plurality of training processes to obtain a plurality of groups of images comprises:
reading, by utilizing a second training process in the plurality of training processes, an image path list recording a storage path of each image in an image data set, and broadcasting the image path list to each training process different from the second training process in the plurality of training processes;
and reading a group of images from the image data set by utilizing each training process in the plurality of training processes based on the image path list and according to a corresponding preset image reading strategy to obtain the plurality of groups of images.
3. The method of claim 1, wherein before applying for the shared memory corresponding to the plurality of sets of images using a first training process of the plurality of training processes, the method further comprises:
calculating the memory size required for supporting caching of a group of images read from the multiple groups of images by utilizing each training process in the multiple training processes to obtain multiple memory sizes corresponding to the multiple groups of images one by one;
summarizing the sizes of the plurality of memories by utilizing the first training process to obtain the size of the whole memory supporting the storage of the plurality of groups of images;
the applying for the shared memory corresponding to the plurality of groups of images by using a first training process of the plurality of training processes includes:
and applying for the shared memory according to the size of the whole memory by utilizing the first training process.
4. The method of claim 3, wherein after calculating, with each of the plurality of training processes, a memory size required to support caching of a corresponding one of the plurality of sets of images to obtain a plurality of memory sizes corresponding one-to-one to the plurality of sets of images, the method further comprises:
and summarizing the memory sizes by utilizing each training process in the training processes to obtain the whole memory size.
5. The method of claim 3, wherein the calculating, with each of the plurality of training processes, a memory size required to support caching of a set of images read from the plurality of sets of images to obtain a plurality of memory sizes corresponding to the plurality of sets of images one to one comprises:
acquiring shape information of a group of images read from the plurality of groups of images by utilizing each training process in the plurality of training processes;
and calculating, by utilizing each training process in the plurality of training processes, according to the shape information of the group of images read from the plurality of groups of images, the memory size required for supporting caching of the read group of images, to obtain the plurality of memory sizes.
6. The method of claim 5, wherein after the acquiring shape information of the read one of the plurality of groups of images by utilizing each of the plurality of training processes, the method further comprises:
and summarizing the shape information of the multiple groups of images by utilizing each training process in the multiple training processes respectively to obtain an information summarizing result.
7. An image caching apparatus, comprising:
the reading module is used for reading a group of images by utilizing each training process in a plurality of training processes to obtain a plurality of groups of images; wherein the plurality of training processes correspond to the plurality of sets of images one to one;
the processing module is used for applying for a shared memory corresponding to the plurality of groups of images by using a first training process in the plurality of training processes and sharing the applied shared memory to each training process different from the first training process in the plurality of training processes;
and the cache module is used for caching a group of images read from the plurality of groups of images to the shared memory by utilizing each training process in the plurality of training processes so that each training process in the plurality of training processes can read the plurality of groups of images from the shared memory during the period of executing the neural network training step.
8. The apparatus of claim 7,
the reading module is specifically configured to read, by using a second training process of the plurality of training processes, an image path list that records a storage path of each image in the image data set, and broadcast the image path list to each training process of the plurality of training processes that is different from the second training process; and reading a group of images from the image data set by utilizing each training process in the plurality of training processes based on the image path list and according to a corresponding preset image reading strategy to obtain the plurality of groups of images.
9. The apparatus of claim 7,
the processing module is further configured to calculate, by using each of the plurality of training processes, a memory size required to support caching of a group of images read from the plurality of groups of images, and obtain a plurality of memory sizes corresponding to the plurality of groups of images one to one; summarizing the sizes of the plurality of memories by utilizing the first training process to obtain the size of the whole memory supporting the storage of the plurality of groups of images;
the processing module is specifically configured to apply for the shared memory according to the size of the entire memory by using the first training process.
10. The apparatus of claim 9,
the processing module is further configured to summarize the memory sizes by using each of the training processes, so as to obtain the total memory size.
11. The apparatus of claim 9,
the processing module is specifically configured to acquire, by using each of the plurality of training processes, shape information of the group of images read from the plurality of groups of images; and calculate, according to the shape information of the group of images read from the plurality of groups of images, the memory size required to support caching of the read group of images, to obtain the plurality of memory sizes.
12. The apparatus of claim 11,
the processing module is further configured to summarize shape information of the plurality of groups of images by using each of the plurality of training processes, respectively, to obtain an information summarization result.
13. An electronic device, characterized in that the electronic device comprises: a processor, a memory, and a communication bus; wherein,
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is configured to execute one or more programs stored in the memory to implement the image caching method of any one of claims 1 to 6.
14. A computer-readable storage medium storing one or more programs which are executable by one or more processors to implement the image caching method of any one of claims 1 to 6.
CN202111145887.XA 2021-09-28 2021-09-28 Image caching method and device, electronic equipment and storage medium Pending CN113870093A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111145887.XA CN113870093A (en) 2021-09-28 2021-09-28 Image caching method and device, electronic equipment and storage medium
PCT/CN2022/074698 WO2023050673A1 (en) 2021-09-28 2022-01-28 Image caching method and apparatus, and electronic device, storage medium and computer program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111145887.XA CN113870093A (en) 2021-09-28 2021-09-28 Image caching method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113870093A true CN113870093A (en) 2021-12-31

Family

ID=78992114

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111145887.XA Pending CN113870093A (en) 2021-09-28 2021-09-28 Image caching method and device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN113870093A (en)
WO (1) WO2023050673A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023050673A1 (en) * 2021-09-28 2023-04-06 上海商汤智能科技有限公司 Image caching method and apparatus, and electronic device, storage medium and computer program product

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108009008B (en) * 2016-10-28 2022-08-09 北京市商汤科技开发有限公司 Data processing method and system and electronic equipment
WO2021134229A1 (en) * 2019-12-30 2021-07-08 深圳市欢太科技有限公司 Text identification method, device, storage medium, and electronic apparatus
US11551145B2 (en) * 2020-02-05 2023-01-10 International Business Machines Corporation Performance based switching of a model training process
CN111367687A (en) * 2020-02-28 2020-07-03 罗普特科技集团股份有限公司 Inter-process data communication method and device
CN113870093A (en) * 2021-09-28 2021-12-31 上海商汤科技开发有限公司 Image caching method and device, electronic equipment and storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023050673A1 (en) * 2021-09-28 2023-04-06 上海商汤智能科技有限公司 Image caching method and apparatus, and electronic device, storage medium and computer program product

Also Published As

Publication number Publication date
WO2023050673A1 (en) 2023-04-06

Similar Documents

Publication Publication Date Title
US9961398B2 (en) Method and device for switching video streams
CN108683826B (en) Video data processing method, video data processing device, computer equipment and storage medium
CN104572278B (en) The method, device and equipment of light application calling local side ability
US10929460B2 (en) Method and apparatus for storing resource and electronic device
CN113302928B (en) System and method for transmitting multiple video streams
CN110968391A (en) Screenshot method, screenshot device, terminal equipment and storage medium
CN114513506A (en) Service processing method, access edge cloud server and service processing system
CN113870093A (en) Image caching method and device, electronic equipment and storage medium
CN110493661B (en) Video file processing method and server
CN108230487A (en) The method and apparatus of shared camera resource
CN111510761B (en) First frame equalization current limiting method and device, computer equipment and readable storage medium
CN112714338B (en) Video transmission method, video playing method, video transmission device, video playing device, computer equipment and storage medium
CN113096218A (en) Dynamic image playing method, device, storage medium and computer equipment
CN111294500B (en) Image shooting method, terminal device and medium
CN109803153B (en) Live video whiteboard drawing method and device
CN108268254B (en) Flash file function library calling method and device, electronic equipment and medium
CN110428453B (en) Data processing method, data processing device, data processing equipment and storage medium
CN110290517B (en) Digital media wireless wifi communication point reading system and method
CN109640023B (en) Video recording method, device, server and storage medium
CN109640170B (en) Speed processing method of self-shooting video, terminal and storage medium
CN112423099A (en) Video loading method and device and electronic equipment
CN113691865A (en) Multimedia playing method and system
CN109242763B (en) Picture processing method, picture processing device and terminal equipment
CN109361956B (en) Time-based video cropping methods and related products
CN116385597B (en) Text mapping method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40057511

Country of ref document: HK