Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
To explain the technical solutions of the present invention, the following description proceeds by way of specific examples.
Fig. 1 shows a flowchart of an implementation of a first depth image denoising method according to an embodiment of the present invention. The method may be executed by a structured light image acquisition device, which may be configured in a mobile terminal and implemented in software, in hardware, or in a combination of both. As shown in fig. 1, the depth image denoising method may include the following steps:
step S11: acquire three frames of depth images, and determine one frame as the current frame depth image.
When a depth image is obtained, the acquired images are matched to generate disparity, and depth is calculated from the disparity of corresponding points to obtain the depth image. In a complex surrounding environment, however, matching errors easily occur, producing abnormal values, i.e., noise. Extensive experiments show that this noise has the following characteristics:
(1) noise is blocky (a large noise block can span hundreds of pixels);
(2) noise generally does not persist for more than 2 consecutive frames of depth images, although some block noise points may appear at the same position in 2 consecutive frames;
(3) noise appears only as spurious values, that is, the depth value at a position that should be 0 is nonzero in the depth image;
(4) noise is random: its depth value is not fixed, and its magnitude, frequency of occurrence, and position are all random;
(5) noise generally occurs in relatively isolated form, and the depth values of the pixels surrounding a noise point are generally 0.
Based on the above noise characteristics, the present embodiment obtains three frames of depth images, records them in sequence as a first depth image, a second depth image, and a third depth image, and takes the second depth image as the current frame depth image. When noise occurs in any one frame of depth image, it generally does not persist across all three frames. The depth images may be acquired by any suitable technique, such as time-of-flight (TOF), structured light, or stereo vision.
Step S12: compare the current frame depth image with the adjacent frame depth images to obtain the differences between the depth image data of adjacent frames, and mark the regions corresponding to the differences as candidate noise points.
One way of making the comparison is as follows: compare the depth information of corresponding pixels in the first and second depth images and record the difference between them as diff1; compare the depth information of corresponding pixels in the second and third depth images and record the difference between them as diff2. A difference means that the depth information of a corresponding pixel in the first, second, or third depth image has changed abruptly, and such a change may occur because the depth image contains noise, so the pixels included in diff1 and diff2 can be regarded as candidate noise points. Since noise generally does not persist for more than 2 frames of depth images, there are two cases: noise that occurs in only one frame, and noise that occurs in two consecutive frames; in either case, the noise information is contained in diff1 or diff2.
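The pixel-wise comparison of step S12 can be sketched as follows. This is a minimal illustration under assumed conventions, not the claimed implementation: frames are represented as 2-D lists of integer depth values, with 0 meaning no depth.

```python
# Minimal sketch of step S12, assuming depth frames are 2-D lists of
# integer depth values (0 = no depth). A pixel is a candidate noise
# point wherever two adjacent frames disagree.

def depth_diff(frame_a, frame_b):
    """Boolean mask: True where corresponding depth values differ."""
    h, w = len(frame_a), len(frame_a[0])
    return [[frame_a[y][x] != frame_b[y][x] for x in range(w)]
            for y in range(h)]

# Three consecutive frames; the second is the current frame, and a
# spurious depth value 9 appears in it at position (0, 1).
d1 = [[0, 0], [0, 5]]
d2 = [[0, 9], [0, 5]]
d3 = [[0, 0], [0, 5]]

diff1 = depth_diff(d1, d2)  # difference between frames 1 and 2
diff2 = depth_diff(d2, d3)  # difference between frames 2 and 3
```

In this toy example the spurious value lands in both diff1 and diff2, matching the case of noise occurring in a single (the current) frame.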
Step S13: taking each candidate noise point as a center, obtain its connected region; the candidate noise point and its connected region form a candidate noise block.
In this embodiment, the search direction of the connected region may be the neighboring pixels of the candidate noise point pixel. Referring to fig. 5, for example, when constructing a candidate noise block in diff1, a candidate noise point in diff1 is first selected as the center pixel; its neighborhood comprises the 8 pixels adjacent to the center, so the candidate noise block contains 1 candidate noise point pixel at the center and the 8 surrounding neighbors. Referring to fig. 6, for another example, if the candidate noise point lies on an edge of the depth image, its neighborhood comprises the 5 adjacent pixels, so the candidate noise block contains the central candidate noise point pixel and 5 neighbors. Referring to fig. 7, as another example, if the candidate noise point lies at a corner of the depth image (i.e., the intersection of two edges), its neighborhood comprises the 3 adjacent pixels, so the candidate noise block contains the central candidate noise point pixel and 3 neighbors. When the candidate noise point is located in diff2, the candidate noise block is constructed in the same manner and is not described again here. It should be understood that the search mode of the connected region may take other forms and is not limited to the above cases.
When constructing candidate noise blocks, note that diff1 may contain multiple candidate noise points, which may be adjacent to one another or far apart. When candidate noise points are adjacent, the blocks constructed for them may share the same neighboring pixel, or one candidate noise point may itself be a neighbor of another; in that case, multiple candidate noise blocks connect to form one larger candidate noise block. When the candidate noise points are far apart, the constructed candidate noise blocks remain independent.
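The connected-region construction of step S13, including the edge and corner cases of figs. 5 to 7 and the merging of adjacent candidate points, can be sketched as below; the flood-fill formulation and function names are illustrative assumptions, not the patented implementation.

```python
# Sketch of step S13: each candidate noise point is grown into a candidate
# noise block from its 8-connected neighbourhood (5 neighbours on an edge,
# 3 at a corner); blocks of touching candidate points merge into one.

def candidate_blocks(mask):
    """mask: 2-D list of bools marking candidate noise points.
    Returns one set of (row, col) pixels per candidate noise block."""
    h, w = len(mask), len(mask[0])
    seen, blocks = set(), []
    for y in range(h):
        for x in range(w):
            if not mask[y][x] or (y, x) in seen:
                continue
            stack, block = [(y, x)], set()
            seen.add((y, x))
            while stack:  # flood-fill over touching candidate points
                cy, cx = stack.pop()
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            block.add((ny, nx))  # point plus its neighbours
                            if mask[ny][nx] and (ny, nx) not in seen:
                                seen.add((ny, nx))
                                stack.append((ny, nx))
            blocks.append(block)
    return blocks

# An isolated interior candidate yields a 9-pixel block (1 + 8 neighbours);
# a candidate in the corner of a 2x2 mask yields 4 pixels (1 + 3 neighbours).
interior = candidate_blocks([[False] * 3,
                             [False, True, False],
                             [False] * 3])
corner = candidate_blocks([[True, False],
                           [False, False]])
```

The flood fill also realizes the merging behavior described above: adjacent candidate points fall into one block rather than producing overlapping blocks.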
After the candidate noise blocks are constructed, they must be filtered to eliminate candidate noise points that are not noise, so that the real noise points can be obtained.
Step S14: filter the candidate noise blocks to obtain real noise points. Referring to fig. 2, the filtering of a candidate noise block may be performed as follows:
step S141: it is determined whether the number of pixels in the candidate noise block is within a first threshold range.
The first threshold range is a range of pixel counts; that is, candidate noise blocks are first filtered by determining whether the number of pixels they contain falls within a preset pixel-count range. When the number of pixels in a candidate noise block is not within the first threshold range, the candidate noise block is not a true noise block and does not require noise reduction processing, so the following step is performed:
step S142: the candidate noise block is filtered.
When the number of pixels in the candidate noise block is within the first threshold range, the candidate noise block is regarded as a noise block, and before noise reduction processing a second filtering step is required:
step S143: determine whether the proportion of pixels having depth values in the connected region of the candidate noise point within the candidate noise block is lower than a second threshold.
At this point, the depth values of the neighboring pixels of the candidate noise point in the candidate noise block are examined to determine which neighbors have depth values and which have depth values of 0. The more pixels that have depth values, the fewer pixels whose depth value is 0. One limiting case is that the depth values of all neighbors of the candidate noise point are 0, meaning that only the candidate noise point itself has a nonzero depth value (the proportion of pixels with depth values in the connected region is then at its lowest); the depth value of the candidate noise point can then be considered an abrupt change caused by noise, so the candidate noise point is a real noise point. The other limiting case is that the depth values of all neighbors are nonzero, meaning that every pixel in the candidate noise block has a nonzero depth value (the proportion is then at its highest); the depth value of the candidate noise point shows no abrupt change, so it is not a real noise point and no noise reduction is needed.
In practice, these two limiting cases rarely occur exactly, so a second threshold may be preset, and the proportion of pixels having depth values in the connected region of the candidate noise point within the candidate noise block is compared against it. If the proportion is not lower than the second threshold, the candidate noise point is not a real noise point, and the following step is performed:
step S144: filter out the candidate noise point without performing noise reduction processing.
If the proportion of pixels having depth values is lower than the second threshold, the candidate noise point is a real noise point, and then:
step S145: determine the candidate noise point to be a real noise point.
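The two-stage filtering of steps S141 to S145 can be sketched as a single predicate. The threshold values below are illustrative assumptions; the method leaves them as configurable parameters.

```python
# Sketch of the two-stage filtering of steps S141-S145; the threshold
# values (min_pixels, max_pixels, ratio_threshold) are illustrative.

def is_real_noise(depth, block, center,
                  min_pixels=2, max_pixels=500, ratio_threshold=0.5):
    """depth: 2-D list of depth values; block: set of (row, col) pixels;
    center: the candidate noise point. True iff the point is real noise."""
    # S141/S142: discard blocks whose pixel count lies outside the
    # first threshold range.
    if not (min_pixels <= len(block) <= max_pixels):
        return False
    # S143: proportion of the candidate point's neighbours that carry a
    # (nonzero) depth value.
    neighbours = [p for p in block if p != center]
    with_depth = sum(1 for (y, x) in neighbours if depth[y][x] != 0)
    # S144/S145: a low proportion means the point is isolated, hence noise.
    return with_depth / len(neighbours) < ratio_threshold

# Limiting case 1: an isolated nonzero pixel surrounded by zero depth.
depth_isolated = [[0, 0, 0], [0, 7, 0], [0, 0, 0]]
# Limiting case 2: the same pixel embedded in valid depth data.
depth_solid = [[5, 5, 5], [5, 7, 5], [5, 5, 5]]
block = {(y, x) for y in range(3) for x in range(3)}
```

With these inputs, the isolated pixel is classified as real noise and the embedded one is not, mirroring the two limiting cases discussed above.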
Step S15: process the depth data of the real noise points in the current frame depth image so that the real noise points are removed from the current frame depth image.
A real noise point may exist in diff1, in diff2, or in both, so when performing noise reduction the real noise data in diff1 and diff2 must be compared to locate the position corresponding to the real noise point in the depth images.
When a real noise point is present only in diff1, there are two cases:
the real noise exists in the first depth image but in neither the second nor the third depth image; in this case the real noise need only be found in the first depth image and its depth value set to zero;
the real noise does not exist in the first depth image but exists in the second and third depth images; in this case the real noise need only be found in the second and third depth images and its depth value set to zero.
When a real noise point is present only in diff2, there are two cases:
the real noise exists in the third depth image but in neither the first nor the second depth image; in this case the real noise need only be found in the third depth image and its depth value set to zero;
the real noise does not exist in the third depth image but exists in the first and second depth images; in this case the real noise need only be found in the first and second depth images and its depth value set to zero.
When a real noise point is present in both diff1 and diff2, there are two cases:
the real noise exists in the first and third depth images but not in the second depth image; in this case the real noise need only be found in the first and third depth images and its depth value set to zero;
the real noise exists in the second depth image but in neither the first nor the third depth image; in this case the real noise need only be found in the second depth image and its depth value set to zero.
After the depth data of the real noise points in the depth images is set to zero, the current frame depth image (namely, the second depth image) is output, and the depth data of the first and third depth images is updated and stored at the same time.
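The six cases above reduce to a small decision. The sketch below tells the sub-cases apart using noise characteristic (3), namely that a noise point is a spurious nonzero value whose true depth is 0; that disambiguation rule, the function name, and the membership flags are assumptions for illustration, not the claimed implementation.

```python
# Sketch of step S15: from membership in diff1/diff2, decide which frames
# carry the real noise point and zero its depth there. Relies on the
# assumption (noise characteristic (3)) that the true depth at a noise
# position is 0, so the noisy frame is the one showing a nonzero value.

def remove_noise_point(d1, d2, d3, point, in_diff1, in_diff2):
    y, x = point
    if in_diff1 and in_diff2:
        # Either frame 2 alone is noisy, or frames 1 and 3 are.
        targets = [d2] if d2[y][x] != 0 else [d1, d3]
    elif in_diff1:
        # Either frame 1 alone is noisy, or frames 2 and 3 are.
        targets = [d1] if d2[y][x] == 0 else [d2, d3]
    else:
        # Only in diff2: either frame 3 alone is noisy, or frames 1 and 2.
        targets = [d3] if d2[y][x] == 0 else [d1, d2]
    for frame in targets:
        frame[y][x] = 0  # set the noise point's depth value to zero

# Noise appears only in the current (second) frame, so it shows up in
# both diff1 and diff2 and is zeroed in frame 2 only.
d1, d2, d3 = [[0]], [[9]], [[0]]
remove_noise_point(d1, d2, d3, (0, 0), True, True)
```

After the call, all three one-pixel frames hold depth 0, i.e. the spurious value in the current frame has been removed while the other frames are untouched.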
It should be understood that, because there may be multiple candidate noise points and candidate noise blocks to be processed, the above denoising process may be performed in a loop until all candidate noise points have been processed and a denoised depth image is obtained (see fig. 8).
The depth image noise reduction method provided by the embodiment of the invention has the beneficial effects that:
at present, when the depth image is subjected to noise reduction, the technical scheme mainly adopted is to perform filtering on a single-frame depth image or perform noise reduction by adopting a noise reduction algorithm, however, the method can only process some salt and pepper noises, but cannot process noise blocks. Some noise reduction algorithms may even smooth the depth data, modifying the original depth data, thereby affecting the depth quality. Moreover, because the denoising processing time in the denoising method is too long, the denoising method is not suitable for denoising under a multi-frame or dynamic environment.
In this embodiment, three frames of depth images are collected for data processing, and the second frame is compared with the other two frames to obtain candidate noise points with abrupt depth changes; candidate noise blocks are obtained through connected regions; to further improve the noise reduction effect, the candidate noise blocks are filtered twice, and the candidate noise blocks appearing in the differences of the three frames of depth image data are comprehensively compared to generate real noise point information. The depth data at the positions of the real noise points in the second frame is processed, and the depth image data of the other two frames is updated at the same time. This effectively removes noise points, can eliminate noise blocks that appear at the same position in at most 2 consecutive frames, and preserves the original shape of the depth image data without smoothing or otherwise modifying the depth values. The technique can therefore be applied to depth image noise reduction in multi-frame or dynamic environments and has wide application prospects.
Fig. 3 shows a flowchart of an implementation of a second depth image denoising method according to an embodiment of the present invention. The method may be executed by a structured light image capturing device, which may be configured in a mobile terminal and implemented in software, in hardware, or in a combination of both. As shown in fig. 3, this embodiment addresses the case where noise is not present in two consecutive frames of depth images (only one frame is noisy), and the depth image noise reduction method may include the following steps:
step S21: acquire two frames of depth images, and determine one frame as the current frame depth image.
Based on the noise characteristics described above, this embodiment obtains two frames of depth images, records them in sequence as a first depth image and a second depth image, and takes the second depth image as the current frame depth image. When noise appears in any one frame, it generally does not appear in both frames. The depth images may be acquired by any suitable technique, such as time-of-flight (TOF), structured light, or stereo vision.
Step S22: compare the current frame depth image with the adjacent frame depth image to obtain the difference between the depth image data of the two adjacent frames, and record the region corresponding to the difference as candidate noise points.
One way of making the comparison is to compare the depth information of corresponding pixels in the first and second depth images and record the difference between them as diff. A difference means that the depth information of a corresponding pixel in the first or second depth image has changed abruptly, and such a change may occur because the depth image contains noise, so the pixels included in diff can be marked as candidate noise points.
Step S23: taking each candidate noise point as a center, obtain its connected region; the candidate noise point and its connected region form a candidate noise block.
In this embodiment, the search direction of the connected region may be the neighboring pixels of the candidate noise point pixel. Referring to fig. 5, for example, when constructing a candidate noise block in diff, a candidate noise point in diff is first selected as the center pixel; its neighborhood comprises the 8 pixels adjacent to the center, so the candidate noise block contains 1 candidate noise point pixel at the center and the 8 surrounding neighbors. Referring to fig. 6, for another example, if the candidate noise point lies on an edge of the depth image, its neighborhood comprises the 5 adjacent pixels, so the candidate noise block contains the central candidate noise point pixel and 5 neighbors. Referring to fig. 7, as another example, if the candidate noise point lies at a corner of the depth image (i.e., the intersection of two edges), its neighborhood comprises the 3 adjacent pixels, so the candidate noise block contains the central candidate noise point pixel and 3 neighbors. It should be understood that the search mode of the connected region may take other forms and is not limited to the above cases.
When constructing candidate noise blocks, note that diff may contain multiple candidate noise points, which may be adjacent to one another or far apart. When candidate noise points are adjacent, the blocks constructed for them may share the same neighboring pixel, or one candidate noise point may itself be a neighbor of another; in that case, multiple candidate noise blocks connect to form one larger candidate noise block. When the candidate noise points are far apart, the constructed candidate noise blocks remain independent.
After the candidate noise blocks are constructed, they must be filtered to eliminate candidate noise points that are not noise, so that the real noise points can be obtained.
Step S24: filter the candidate noise blocks to obtain real noise points. Referring to fig. 4, the filtering of a candidate noise block may be performed as follows:
step S241: it is determined whether the number of pixels in the candidate noise block is within a first threshold range.
The first threshold range is a range of pixel counts; that is, candidate noise blocks are first filtered by determining whether the number of pixels they contain falls within a preset pixel-count range. When the number of pixels in a candidate noise block is not within the first threshold range, the candidate noise block is not a true noise block and does not require noise reduction processing, so the following step is performed:
step S242: the candidate noise block is filtered.
When the number of pixels in the candidate noise block is within the first threshold range, the candidate noise block is regarded as a noise block, and before noise reduction processing a second filtering step is required:
step S243: determine whether the proportion of pixels having depth values in the connected region of the candidate noise point within the candidate noise block is lower than a second threshold.
At this point, the depth values of the neighboring pixels of the candidate noise point in the candidate noise block are examined to determine which neighbors have depth values and which have depth values of 0. The more pixels that have depth values, the fewer pixels whose depth value is 0. One limiting case is that the depth values of all neighbors of the candidate noise point are 0, meaning that only the candidate noise point itself has a nonzero depth value (the proportion of pixels with depth values in the connected region is then at its lowest); the depth value of the candidate noise point can then be considered an abrupt change caused by noise, so the candidate noise point is a real noise point. The other limiting case is that the depth values of all neighbors are nonzero, meaning that every pixel in the candidate noise block has a nonzero depth value (the proportion is then at its highest); the depth value of the candidate noise point shows no abrupt change, so it is not a real noise point and no noise reduction is needed.
In practice, these two limiting cases rarely occur exactly, so a second threshold may be preset, and the proportion of pixels having depth values in the connected region of the candidate noise point within the candidate noise block is compared against it. If the proportion is not lower than the second threshold, the candidate noise point is not a real noise point, and the following step is performed:
step S244: filter out the candidate noise point without performing noise reduction processing.
If the proportion of pixels having depth values is lower than the second threshold, the candidate noise point is a real noise point, and then:
step S245: determine the candidate noise point to be a real noise point.
Step S25: process the depth data of the real noise points in the current frame depth image so that the real noise points are removed from the current frame depth image.
A real noise point may exist in the first depth image or the second depth image, so when performing noise reduction the position corresponding to the real noise point must be found in the depth images.
After the depth data of the real noise points in the depth images is set to zero, the current frame depth image (namely, the second depth image) is output, and the depth data of the first depth image is updated and stored at the same time.
It should be understood that, because there may be multiple candidate noise points and candidate noise blocks to be processed, the above denoising process may be performed in a loop until all candidate noise points have been processed and a denoised depth image is obtained.
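Steps S21 to S25 of this two-frame variant can be composed into a single loop. The following is an illustrative end-to-end sketch under the same assumptions as before (2-D list frames, 0 meaning no depth, example thresholds, and the second filter evaluated on the current frame for simplicity), not the claimed implementation:

```python
# End-to-end sketch of the two-frame method (steps S21-S25). Frames are
# 2-D lists of depth values; the threshold values are illustrative.

def denoise_two_frames(d1, d2, min_px=2, max_px=500, ratio=0.5):
    h, w = len(d2), len(d2[0])
    # S22: mark candidate noise points where the two frames differ.
    mask = [[d1[y][x] != d2[y][x] for x in range(w)] for y in range(h)]
    seen = set()
    for y in range(h):
        for x in range(w):
            if not mask[y][x] or (y, x) in seen:
                continue
            # S23: grow the candidate noise block over the 8-neighbourhood.
            stack, points, block = [(y, x)], [], set()
            seen.add((y, x))
            while stack:
                cy, cx = stack.pop()
                points.append((cy, cx))
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            block.add((ny, nx))
                            if mask[ny][nx] and (ny, nx) not in seen:
                                seen.add((ny, nx))
                                stack.append((ny, nx))
            # S24, first filter: block size within the first threshold range.
            if not (min_px <= len(block) <= max_px):
                continue
            for (py, px) in points:
                # S24, second filter: proportion of neighbours carrying a
                # nonzero depth value (checked in the current frame here).
                nb = [p for p in block if p != (py, px)]
                nz = sum(1 for (ny, nx) in nb if d2[ny][nx] != 0)
                if nb and nz / len(nb) < ratio:
                    # S25: zero the real noise point in whichever frame
                    # shows the spurious nonzero depth.
                    for frame in (d1, d2):
                        if frame[py][px] != 0:
                            frame[py][px] = 0
    return d2

d1 = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
d2 = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]  # one isolated noise point
out = denoise_two_frames(d1, d2)
```

Running the sketch on the example removes the isolated spurious value from the current frame while a point embedded in valid depth data would pass the second filter untouched.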
The depth image noise reduction method provided by the embodiment of the invention has the beneficial effects that:
In this embodiment, two frames of depth images are collected for data processing, and the two frames are compared to obtain candidate noise points with abrupt depth changes; candidate noise blocks are obtained through connected regions; to further improve the noise reduction effect, the candidate noise blocks are filtered twice to generate real noise point information. The depth data at the positions of the real noise points in the second frame is processed, and the first depth image data is updated at the same time. This effectively removes noise points, eliminates noise blocks that appear in at most one frame at a given position, and preserves the original shape of the depth image data. The technique can be applied to depth image noise reduction in multi-frame or dynamic environments and has wide application prospects.
It should be understood that, in other embodiments, the number of acquired depth image frames may also be 4 or more. However, when noise reduction is performed on 4 or more frames, an obstacle in the depth images may be mistakenly identified as noise, and the amount of computation also increases greatly.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Referring to fig. 9, an embodiment of the present invention further provides a depth image denoising apparatus 30, which can be used to implement the depth image denoising method described above and includes a first module 31, a second module 32, a third module 33, a fourth module 34, and a fifth module 35. The first module 31 is configured to obtain at least two frames of depth images and determine one of them as the current frame depth image; the second module 32 is configured to compare the current frame depth image with an adjacent frame depth image, obtain the difference between the two adjacent frames of depth image data, and record the difference as candidate noise points; the third module 33 is configured to obtain the connected region of each candidate noise point, taking the candidate noise point as a center, the candidate noise point and its connected region forming a candidate noise block; the fourth module 34 is configured to filter the candidate noise blocks to obtain real noise points; and the fifth module 35 is configured to process the depth data of the real noise points in the current frame depth image so that the real noise points are removed from it.
The first module 31 includes a camera component, which may be a camera separate from the depth image denoising apparatus or may be integrated into it.
In one embodiment, the first module 31 obtains two frames of images; when comparing, the second module 32 compares the depth information of corresponding pixels in the first and second depth images and records the difference between them as diff. A difference means that the depth information of a corresponding pixel in the first or second depth image has changed abruptly, and such a change may occur because the depth image contains noise, so the pixels included in diff can be marked as candidate noise points.
In one embodiment, the first module 31 obtains three frames of images; when comparing, the second module 32 compares the depth information of corresponding pixels in the first and second depth images and records the difference as diff1, and compares the depth information of corresponding pixels in the second and third depth images and records the difference as diff2. A difference means that the depth information of a corresponding pixel in the first, second, or third depth image has changed abruptly, and such a change may occur because the depth image contains noise, so the pixels included in diff1 and diff2 can be regarded as candidate noise points.
When searching for a connected region, the search direction of the third module 33 may be the neighboring pixels of the candidate noise point pixel. For example, when constructing a candidate noise block in diff, a candidate noise point in diff is first selected as the center pixel; its neighborhood comprises the 8 pixels adjacent to the center, so the candidate noise block contains 1 candidate noise point pixel at the center and the 8 surrounding neighbors. Of course, the number of neighboring pixels may take other values and is not limited to the above.
Referring to fig. 10, the fourth module 34 includes a first filtering unit 341 and a second filtering unit 342. The first filtering unit 341 is configured to determine whether the number of pixels in a candidate noise block is within the first threshold range, to filter out the candidate noise block if it is not, and to pass the image data to the second filtering unit 342 if it is. The second filtering unit 342 is configured to determine whether the proportion of pixels having depth values in the connected region of the candidate noise point within the candidate noise block is lower than the second threshold; if the proportion is not lower than the second threshold, the candidate noise point is filtered out; if it is lower, the candidate noise point is a real noise point, and noise reduction processing continues in the fifth module 35.
The fifth module 35 sets the depth value of the pixel corresponding to the real noise point in the depth image to zero, outputs the depth image of the current frame, and updates and stores the depth data of other depth images.
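The zeroing step of the fifth module 35 amounts to the following minimal sketch (the in-place list-of-lists representation is an assumption):

```python
def suppress_noise(depth, noise_pixels):
    """Set the depth value of each confirmed real noise pixel to zero
    (fifth module sketch).  `depth` is modified in place and returned;
    the data layout is an illustrative assumption."""
    for (r, c) in noise_pixels:
        depth[r][c] = 0
    return depth
```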
In one embodiment, the depth image noise reduction apparatus further includes an output interface for communicating with an external device. For example, the output interface may include a wired connection such as a USB connection, a FireWire connection, or an Ethernet cable connection. In other embodiments, the depth image noise reduction apparatus may communicate with an external device via a wireless connection, such as Bluetooth or a WLAN network. For example, the depth image denoised by the image noise reduction apparatus may be output to the outside through the output interface, or may be transmitted to an external device through a wireless connection.
Fig. 11 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 11, the terminal device 6 of this embodiment includes: a processor 60, a memory 61 and a computer program 62, such as a depth image noise reduction program, stored in said memory 61 and executable on said processor 60. The processor 60, when executing the computer program 62, implements the steps in the various depth image denoising method embodiments described above, such as the steps S11-S15 shown in fig. 1. Alternatively, the processor 60, when executing the computer program 62, implements the functions of the modules/units in the above-described device embodiments, such as the functions of the modules 31 to 35 shown in fig. 9.
Illustratively, the computer program 62 may be partitioned into one or more modules/units that are stored in the memory 61 and executed by the processor 60 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 62 in the terminal device 6. For example, the computer program 62 may be divided into a synchronization module, a summarization module, an acquisition module, and a return module (a module in a virtual device), each for implementing the above-described functions.
The terminal device 6 may be a desktop computer, a notebook, a palmtop computer, a cloud server, or another computing device. The terminal device may include, but is not limited to, the processor 60 and the memory 61. Those skilled in the art will appreciate that fig. 11 is merely an example of the terminal device 6 and does not constitute a limitation of the terminal device 6, which may include more or fewer components than those shown, combine some components, or use different components; for example, the terminal device may also include input/output devices, network access devices, buses, etc.
The processor 60 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 61 may be an internal storage unit of the terminal device 6, such as a hard disk or a memory of the terminal device 6. The memory 61 may also be an external storage device of the terminal device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card, or the like, fitted to the terminal device 6. Further, the memory 61 may include both an internal storage unit and an external storage device of the terminal device 6. The memory 61 is used to store the computer program and other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit, and the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from each other and do not limit the protection scope of the present application. For the specific working processes of the units and modules in the system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated modules/units are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the methods of the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments may be implemented. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications and substitutions do not cause the corresponding technical solutions to depart in essence from the spirit and scope of the embodiments of the present invention, and are intended to be included within the protection scope of the present invention.