CN113487641A - Image edge detection method and device based on STT-MRAM - Google Patents

Image edge detection method and device based on STT-MRAM Download PDF

Info

Publication number
CN113487641A
CN113487641A CN202110886678.4A CN202110886678A CN113487641A CN 113487641 A CN113487641 A CN 113487641A CN 202110886678 A CN202110886678 A CN 202110886678A CN 113487641 A CN113487641 A CN 113487641A
Authority
CN
China
Prior art keywords
color space
image
edge detection
source image
processed image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110886678.4A
Other languages
Chinese (zh)
Inventor
李月婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Haicun Microelectronics Co ltd
Original Assignee
Zhizhen Storage Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhizhen Storage Beijing Technology Co ltd filed Critical Zhizhen Storage Beijing Technology Co ltd
Priority to CN202110886678.4A priority Critical patent/CN113487641A/en
Publication of CN113487641A publication Critical patent/CN113487641A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics

Abstract

The invention discloses an image edge detection method and device based on STT-MRAM, and relates to the technical field of computer storage equipment. The method comprises the following steps: storing a source image acquired in real time into an STT-MRAM memory; converting the source image from an original color space to a target color space; performing image processing operation on the source image converted into the target color space, performing multidirectional edge detection by using a Sobel algorithm, and outputting a processed image; converting the processed image from the target color space to the original color space; judging the abrasion degree of the processed image by comparing the source image with the processed image converted into the original color space; if the abrasion degree meets a preset condition, outputting the processing image; and if the abrasion degree does not meet the preset condition, re-executing the image processing operation.

Description

Image edge detection method and device based on STT-MRAM
Technical Field
The disclosure relates to the technical field of computer storage equipment, in particular to an image edge detection method and device based on STT-MRAM.
Background
In recent years, with the mutual fusion of software and hardware technologies, the innovative development of products and the iterative updating speed of the products are accelerated. In many fields, edge detection techniques for image data are one of the key challenging tasks that the system platform must address. The image edge detection algorithm needs a system with real-time processing capability, and extracts the change data in target images in different frames in real time to complete important tasks such as image segmentation, identification, image positioning and the like; and the edge detection extracts image points with obvious changes according to the image data characteristics, and is used in the fields of image processing and computer vision.
At present, the image processing platform mainly carries out system application through a DSP (digital signal processor) and an ARM (advanced RISC machines) system, but the system cannot meet the requirements of real-time performance and storage cache of the image processing platform. The edge detection algorithm applied to image processing has a plurality of detection modes: the Canny edge detection algorithm has a single edge detection standard, can accurately position and inhibit noise, but cannot adapt to a threshold; the Roberts edge detection algorithm does not easily detect the oblique edges of the image data; the LOG edge detection algorithm can cause edge modes of different degrees according to the size of the space scale factor; the precision is lower when the Prewitt edge detection algorithm locates the image edge; the traditional algorithm has the defects of inaccurate edge positioning, high noise, image loss and the like; moreover, most images processed by the edge detection algorithm are directly output, which causes the result of the edge detection algorithm to generate an application error of a corresponding function.
Disclosure of Invention
Aiming at the technical problems in the prior art, the embodiment of the disclosure provides an image edge detection method and device based on STT-MRAM, which can solve the problems in the prior art, such as easy data loss, inaccurate edge positioning, high noise, image loss and the like.
A first aspect of an embodiment of the present disclosure provides an STT-MRAM-based image edge detection method, including:
storing a source image acquired in real time into an STT-MRAM memory;
converting the source image from an original color space to a target color space;
performing image processing operation on the source image converted into the target color space, performing multidirectional edge detection by using a Sobel algorithm, and outputting a processed image;
converting the processed image from the target color space to the original color space;
judging the abrasion degree of the processed image by comparing the source image with the processed image converted into the original color space; if the abrasion degree meets a preset condition, outputting the processing image; and if the abrasion degree does not meet the preset condition, re-executing the image processing operation.
In some implementations, converting the source image from an original color space to a target color space specifically includes:
and converting the source image from the original color space to a second color space, and then converting the source image from the second color space to a target color space.
In some implementations, the method further comprises, after converting the source image from an original color space to a second color space: completing a spatial co-range mapping between the original color space and the second color space on the source image.
In some implementations, the raw color space is specifically an RGB color space; the second color space is specifically an XYZ color space; the target color space is specifically a Lab color space.
In some implementations, the performing image processing operations specifically includes: and calculating the color distance between pixel points in the source image converted into the target color space, and calculating a gradient operator space according to the color distance and the gradient value corresponding to the filter.
In some implementations, the performing multi-directional edge detection using the Sobel algorithm specifically includes: and dividing the source image converted into the target color space into one or more gradient directions by taking a preset angle as a direction interval, and performing pixel data processing one by one to output a processed image.
In some implementations, determining the degree of wear of the processed image by comparing the source image with the processed image transformed to the original color space includes: and calculating index data of the source image and the processed image converted into the original color space, and judging the abrasion degree of the processed image according to the index data.
In some implementations, the metric data specifically includes a metric value and a covariance value.
In some implementations, determining the degree of wear of the processed image from the metric data specifically includes: the smaller the absolute value of the index value and the covariance value difference is, the less the degree of wear of the processed image is.
A second aspect of embodiments of the present disclosure provides an image edge detection apparatus based on STT-MRAM, the apparatus including:
the real-time acquisition module is used for storing the source image acquired in real time into the STT-MRAM memory;
the first color space conversion module is used for converting the source image from an original color space to a target color space;
the multidirectional edge detection module is used for executing image processing operation on the source image converted into the target color space, executing multidirectional edge detection by using a Sobel algorithm and outputting a processed image;
a second color space conversion module for converting the processed image from the target color space to the original color space;
the abrasion degree judging module is used for judging the abrasion degree of the processed image by comparing the source image with the processed image converted into the original color space; if the abrasion degree meets a preset condition, outputting the processing image; and if the abrasion degree does not meet the preset condition, re-executing the image processing operation.
The beneficial effects of the embodiment of the disclosure are: the work of refreshing and maintaining internal data by reducing the stored electric quantity by using the nonvolatile memory is adopted, so that the power consumption of the system is saved; the cache work of the edge detection system is improved, and the problem of system data reading delay is reduced; by converting the color space, the precision and accuracy of the edge detection algorithm are improved.
Drawings
The features and advantages of the present disclosure will be more clearly understood by reference to the accompanying drawings, which are illustrative and not to be construed as limiting the disclosure in any way, and in which:
FIG. 1 is a diagram illustrating a real-time image acquisition and edge detection system according to some embodiments of the present disclosure;
FIG. 2 is a flow chart of a STT-MRAM based image edge detection method according to some embodiments of the present disclosure;
FIG. 3 is an STT-MRAM based image edge detection device, shown in accordance with some embodiments of the present disclosure.
Detailed Description
In the following detailed description, numerous specific details of the disclosure are set forth by way of examples in order to provide a thorough understanding of the relevant disclosure. However, it will be apparent to one of ordinary skill in the art that the present disclosure may be practiced without these specific details. It should be understood that the use of the terms "system," "apparatus," "unit" and/or "module" in this disclosure is a method for distinguishing between different components, elements, portions or assemblies at different levels of sequence. However, these terms may be replaced by other expressions if they can achieve the same purpose.
It will be understood that when a device, unit or module is referred to as being "on" … … "," connected to "or" coupled to "another device, unit or module, it can be directly on, connected or coupled to or in communication with the other device, unit or module, or intervening devices, units or modules may be present, unless the context clearly dictates otherwise. For example, as used in this disclosure, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present disclosure. As used in the specification and claims of this disclosure, the terms "a," "an," "the," and/or "the" are not intended to be inclusive in the singular, but rather are inclusive in the plural, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" are intended to cover only the explicitly identified features, integers, steps, operations, elements, and/or components, but not to constitute an exclusive list of such features, integers, steps, operations, elements, and/or components.
These and other features and characteristics of the present disclosure, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, will be better understood by reference to the following description and drawings, which form a part of this specification. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the disclosure. It will be understood that the figures are not drawn to scale.
Various block diagrams are used in this disclosure to illustrate various variations of embodiments according to the disclosure. It should be understood that the foregoing and following structures are not intended to limit the present disclosure. The protection scope of the present disclosure is subject to the claims.
In recent years, with the mutual fusion of software and hardware technologies, the innovative development of products and the iterative updating speed of the products are accelerated. In many fields, edge detection techniques for image data are one of the key challenging tasks that the system platform must address. The image edge detection algorithm needs a system with real-time processing capability, and extracts the change data in target images in different frames in real time to complete important tasks such as image segmentation, identification, image positioning and the like; and the edge detection extracts image points with obvious changes according to the image data characteristics, and is used in the fields of image processing and computer vision.
At present, the image processing platform mainly carries out system application through a DSP (digital signal processor) and an ARM (advanced RISC machines) system, but the system cannot meet the requirements of real-time performance and storage cache of the image processing platform. The edge detection algorithm applied to image processing has a plurality of detection modes: the Canny edge detection algorithm has a single edge detection standard, can accurately position and inhibit noise, but cannot adapt to a threshold; the Roberts edge detection algorithm does not easily detect the oblique edges of the image data; the LOG edge detection algorithm can cause edge modes of different degrees according to the size of the space scale factor; the precision is lower when the Prewitt edge detection algorithm locates the image edge; the traditional algorithm has the defects of inaccurate edge positioning, high noise, image loss and the like; moreover, most images processed by the edge detection algorithm are directly output, which causes the result of the edge detection algorithm to generate an application error of a corresponding function.
In order to solve the above problems, an embodiment of the present disclosure discloses a real-time image acquisition and edge detection system, a general framework of the system is shown in fig. 1, a main control chip is XC7S50, a cache is ST-DDR3, a start file and an application program of the system are stored in QSPI Flash, a 50Mhz crystal oscillator provides a stable clock for the whole system, and an HDMI output interface includes 3-way data and 1-way clock. The camera is connected to the main control chip through an 18-pin CMOS camera interface. The JTAG download interface is adopted for system debugging and updating. The key and the LED lamp are connected to the I/O port, and low level triggering is effective. And the 5V power supply of the system is converted into 3.3V power supply to supply power to each module of the system.
The implementation of the present disclosure further discloses an image edge detection method based on STT-MRAM, specifically as shown in fig. 2, the method includes:
s101, storing a source image acquired in real time into an STT-MRAM memory;
s102, converting the source image from an original color space to a target color space;
s103, performing image processing operation on the source image converted into the target color space, performing multidirectional edge detection by using a Sobel algorithm, and outputting a processed image;
s104, converting the processed image from the target color space to the original color space;
s105, judging the abrasion degree of the processed image by comparing the source image with the processed image converted into the original color space; if the abrasion degree meets a preset condition, outputting the processing image; and if the abrasion degree does not meet the preset condition, re-executing the image processing operation.
In some implementations, converting the source image from an original color space to a target color space specifically includes:
and converting the source image from the original color space to a second color space, and then converting the source image from the second color space to a target color space.
In some implementations, the method further comprises, after converting the source image from an original color space to a second color space: completing a spatial co-range mapping between the original color space and the second color space on the source image.
In some implementations, the raw color space is specifically an RGB color space; the second color space is specifically an XYZ color space; the target color space is specifically a Lab color space.
In some implementations, the performing image processing operations specifically includes: and calculating the color distance between pixel points in the source image converted into the target color space, and calculating a gradient operator space according to the color distance and the gradient value corresponding to the filter.
In some implementations, the performing multi-directional edge detection using the Sobel algorithm specifically includes: and dividing the source image converted into the target color space into one or more gradient directions by taking a preset angle as a direction interval, and performing pixel data processing one by one to output a processed image.
In some implementations, determining the degree of wear of the processed image by comparing the source image with the processed image transformed to the original color space includes: and calculating index data of the source image and the processed image converted into the original color space, and judging the abrasion degree of the processed image according to the index data.
In some implementations, the metric data specifically includes a metric value and a covariance value.
In some implementations, determining the degree of wear of the processed image from the metric data specifically includes: the smaller the absolute value of the index value and the covariance value difference is, the less the degree of wear of the processed image is.
In the implementation of the disclosure, the Sobel multidirectional edge detection algorithm is operated in the Lab color space, the processed image data is output, and then the processed image data is converted into the RGB color space through the XYZ color space, so that the precision and accuracy of the edge detection algorithm are improved.
Higher-precision pixel data can be obtained in Lab color space, generally, L represents the brightness of a pixel, and the value range is [0,100 ]; a represents a red pixel to a green pixel, and the value range is [127, -128 ]; b represents a yellow to blue pixel, and the value range is [127, -128 ]. In order to obtain an accurate edge model, RGB color space data collected by a camera needs to be converted into an XYZ color space and then into a Lab color space.
The implementation of the present disclosure further discloses an image edge detection method based on STT-MRAM, which specifically includes:
s201: storing source image data acquired by a CMOS image sensor in an STT-MRAM memory, and caching the acquired image data in real time;
s202, aiming at the collected source image (i, j), extracting according to an RGB color space to obtain source image components of R (i, j), G (i, j) and B (i, j), and converting the RGB color space into an XYZ color space;
Figure BDA0003194405610000081
and S203, carrying out complete space equal range mapping on the source image data to finish the linear symmetry of the two color spaces.
Specifically, the sum of coefficients of X is 0.950456, the sum of coefficients of Y is 1.0, and the sum of coefficients of Z is 1.088754, as shown in equation (3), the equivalent range mapping of the RGB space and the XYZ space is completed.
Figure BDA0003194405610000082
S204, converting pixel points in an XYZ color space into Lab color space pixel point functions, wherein the color coordinates of the XYZ color space are respectively X (i, j), Y (i, j) and Z (i, j), and the coordinate components converted into the Lab color space are respectively L (i, j), a (i, j) and b (i, j);
specifically, the pixel point in XYZ color space is converted into a Lab color space pixel point function by formula (4), and the following function formula (4) is set according to different value ranges:
Figure BDA0003194405610000083
converting the components X (i, j), Y (i, j), Z (i, j) of the XYZ color space into the components L (i, j), a (i, j), b (i, j) of the Lab color space:
L*=116f(Y/Yn)-16
a*=500[f(X/Xn)-f(Y/Yn)] (5)
b*=200[f(Y/Yn)-f(Z/Zn)]
wherein Xn=95.047,Yn=100,Zn=108.883;
S205, calculating the color distance of the pixel points in the Lab color space;
specifically, two pixel points are respectively (L1, a1, b1) and (L2, a2, b2), and the difference value is calculated for each dimension independently, as shown in formula (6), the color distance can provide preparation work for the dimension algorithm;
Figure BDA0003194405610000091
and S206, performing spatial calculation of the gradient operator according to the gradient value corresponding to the filter, and finishing the spatial calculation of the gradient operator of the 7 x 7 filter according to the color distance.
As shown in formulas (7) to (15), in particular, the following formula:
the gradient operator formula in the 0 ° direction is:
D0=CD(Xi-2,j-1,Xi-2,j+1)+2CD(Xi-2,j-1,Xi-2,j+1+4CD(Xi-1,j-1,Xi-1,j+1)+8CD(Xi,j-1,Xi,j+1)+4CD(Xi+1,j-1,Xi+1,j+1)+2CD(Xi+2,j-1,Xi+2,j+1)+CD(Xi+2,j-1,Xi+2,j+1); (7)
the gradient operator formula in the 45 ° direction is:
D1=CD(Xi-2,j-1,Xi+2,j+1)+2CD(Xi-2,j,Xi+2,j+4CD(Xi-1,j+1,Xi+1,j-1)+8CD(Xi,j-1,Xi,j+1)+2CD(Xi-2,j-2,Xi+2,j+2)+4CD(Xi-1,j-1,Xi+·1,j+1)+CD(Xi-1,j,Xi-1,j); (8)
the gradient operator formula in the 90 ° direction is:
D2=CD(Xi+2,j-3,Xi+3,j-2)+2CD(Xi+1,j-2,Xi+2,j-1+8CD(Xi,j-1,Xi+1,j)+4CD(Xi-1,j-1,Xi+1,j+1)+8CD(Xi-1,j,Xi,j+1)+2CD(Xi-2,j+1,Xi-1,j+2)+CD(Xi-2,j+2,Xi-2,j); (9)
the gradient operator formula in the 135 ° direction is:
D3=CD(Xi+1,j-2,Xi+1,j+2)+2CD(Xi,j-2,Xi+2,j+2+4CD(Xi-1,j-1,Xi+1,j+1)+8CD(Xi-1,j,Xi+1,j)+2CD(Xi-1,j+1,Xi+1,j-1)+4CD(Xi,j+1,Xi,j-1)+8CD(Xi+1,j,Xi+1,j); (10)
the gradient operator formula in the 180 ° direction is:
D4=CD(Xi-2,j-2,Xi+2,j-2)+2CD(Xi-1,j-2,Xi+1,j-2+4CD(Xi-1,j-1,Xi+1,j-1)+8CD(Xi-1,j,Xi-1,j)+4CD(Xi-1,j-2,Xi+1,j-2)+2CD(Xi-1,j-2,Xi+1,j-2)+CD(Xi-2,j+2,Xi+2,j+2); (11)
the gradient operator formula in the 225 ° direction is:
D5=CD(Xi+1,j-2,Xi+1,j+2)+2CD(Xi,j-2,Xi+2,j+2+4CD(Xi-1,j-1,Xi+1,j+1)+8CD(Xi-1,j,Xi+1,j)+2CD(Xi-1,j+1,Xi+1,j-1)+4CD(Xi,j+1,Xi,j-1)+8CD(Xi+1,j,Xi+1,j); (12)
the gradient operator formula in the 270 ° direction is:
D6=CD(Xi-2,j-2,Xi-2,j-2)+8CD(Xi-1,j-2,Xi-2,j-1+2CD(Xi,j-1,Xi-1,j)+4CD(Xi+1,j-1,Xi-1,j+1)+2CD(Xi+1,j,Xi,j+1)+8CD(Xi+2,j+1,Xi+1,j+2)+CD(Xi+2,j+2,Xi+2,j+2); (13)
the gradient operator formula in the 315 ° direction is:
D7=CD(Xi-2,j-1,Xi+3,j+1)+2CD(Xi-2,j,Xi+2,j+4CD(Xi-1,j+1,Xi+1,j-1)+8CD(Xi,j-1,Xi,j+1)+2CD(Xi-1,j-1,Xi+1,j+1)+4CD(Xi-1,j,Xi+1,j)+8CD(Xi-2,j-1,Xi+2,j+1); (14)
s207, performing edge detection processing of a Sobel algorithm according to Lab color space pixel points, dividing an image plane into 8 gradient directions by taking 45-degree as a space interval, and performing pixel data processing one by one;
the specific data processing formula is shown as formula (15):
Figure BDA0003194405610000101
s208, converting the pixel points in the Lab color space into an XYZ color space according to the pixel points as output values;
specifically, the color space conversion function formula is shown in formula (16):
Figure BDA0003194405610000102
s209: extracting components L (i, j), a (i, j), b (i, j) of the Lab color space, and converting the components L (i, j), a (i, j), b (i, j) into XYZ color space components X (i, j), Y (i, j), Z (i, j);
specifically, as shown in formula (17):
Figure BDA0003194405610000103
wherein Xn is 95.047, Yn is 100, Zn is 108.883;
s210: converting the components X (i, j), Y (i, j), Z (i, j) of the XYZ color space into RGB color space components R (i, j), G (i, j), B (i, j);
specifically, as shown in equation (18):
Figure BDA0003194405610000111
s211: according to the output image data, comparing indexes of a source image and an image processed by an edge algorithm, and judging the abrasion degree of the processed image; if the abrasion degree meets a preset condition, outputting the processing image; and if the abrasion degree does not meet the preset condition, re-executing the image processing operation.
Specifically, the gray values of the source image and the image after edge algorithm processing are calculated according to formula (19). Wherein, the Ugray _ origin is a gray level mean value of the source image, and the Ugray _ stand is a gray level mean value of the image after the edge detection processing. The source image and the edge detection processed image are both RGB images. R1(Xi, Xj), G1(Xi, Xj), and B1(Xi, Xj) are average pixel points of the source image, and R2(Xi, Xj), G2(Xi, Xj), and B2(Xi, Xj) are average pixel points of the image after the edge detection processing.
Figure BDA0003194405610000112
Gorgin and Gstand are the covariance of the source image and the image after edge detection processing, respectively. P1(Xi, Xj) and P2(Xi, Xj) are pixel point sets of the image field after the source image and the edge detection processing respectively. u1ij and u2ij are the average values of the domain pixel point sets of the source image and the image after edge detection processing respectively.
Figure BDA0003194405610000113
And calculating the index values of the source image and the image processed by the edge detection algorithm according to the formula:
Figure BDA0003194405610000121
gorgin _ stad is the covariance of the source image and the edge detection processed image:
Figure BDA0003194405610000122
comparing the values of the two formulas, wherein the smaller the absolute value of the difference between the index value and the covariance value, the less the abrasion degree of the processed image; namely, the image data after being processed by the representative edge detection algorithm has less abrasion degree when the two values of V and Gorgin _ stad are closer; on the contrary, when the difference between V and Gorigin _ stad is larger, the image data processed by the representative edge detection algorithm has more abrasion, and the image processing is performed again.
The embodiment of the disclosure uses MRAM to replace the cache work of DRAM in the edge detection system, and uses the characteristics of fast read-write speed, non-volatility and the like of MRAM. And after the edge detection algorithm is processed, comparing the image data abrasion degree of the original image and the image processed by the edge detection image, and judging whether the image data is output currently. Meanwhile, the non-volatile memory MRAM is applied to replace the cache work of the volatile DRAM in the edge detection algorithm. Since the MRAM has a fast data read capability and non-volatility, the data caching work can be improved. When the system is suddenly powered off, the data information can be stored, and after the system is started, the system can continue to carry out the edge detection processing technology.
And (3) calculating image edge detection data by applying a multidirectional edge detection algorithm through sobel and combining a differential derivation and Gaussian smoothing formula with the multidirectional detection algorithm through a weighting formula, and detecting image edge information in different directions in multiple directions in a Lab color space. The algorithm improves the accuracy of the edge detection algorithm. And comparing the image wear indexes by applying the source image and the image processed by the edge algorithm. When the data indexes of the two images are close, the image data processed by the edge detection algorithm is proved to have lower abrasion degree; on the contrary, the image data processed by the edge detection algorithm has higher abrasion degree. The system determines whether to carry out actual output according to the requirements of an actual scene and the image data abrasion degree obtained by the algorithm; by calculating indexes of the source image and the image after edge detection processing, when the index data of the two image data are close to each other, the image data processed by the edge detection algorithm has lower abrasion degree; on the contrary, the image data processed by the edge detection algorithm has higher abrasion degree, and the system can selectively output the image data according to the actual application requirement.
The embodiment of the present disclosure also discloses an image edge detection apparatus 300 based on STT-MRAM, as shown in fig. 3, the apparatus includes:
the real-time acquisition module 301 is used for storing a source image acquired in real time into an STT-MRAM memory;
a first color space conversion module 302 for converting the source image from an original color space to a target color space;
a multi-direction edge detection module 303, configured to perform image processing on the source image converted into the target color space, perform multi-direction edge detection using a Sobel algorithm, and output a processed image;
a second color space conversion module 304 for converting the processed image from the target color space to the original color space;
a wear degree determination module 305, configured to determine a wear degree of the processed image by comparing the source image with the processed image converted into the original color space; if the abrasion degree meets a preset condition, outputting the processing image; and if the abrasion degree does not meet the preset condition, re-executing the image processing operation.
In the embodiment of the disclosure, the STT-MRAM is used as storage to replace DRAM memory commonly used in the system. STT-MRAM memory is driven using an ST-DDR3 controller. Since STT-MRAM storage is non-volatile, the ST-DDR3 controller reduces power refresh operations of the STT-MRAM memory chip. The electric quantity is saved for the whole system application, the system endurance is improved, and the problem of system data reading delay is reduced. Image data captured by the camera module is converted into an XYZ color space from an RGB color space and finally converted into a Lab color space, a Sobel multidirectional edge detection process is executed in the Lab color space, and the data processed by the algorithm is converted into the RGB color space through the XYZ color space so as to perform high-quality edge detection processing. By comparing the indexes of the image data after the edge detection processing with the indexes of the source image, when the index data are close, the data abrasion degree after the edge detection algorithm processing is represented to be less, high-quality edge detection graph line data are provided, and the precision and the accuracy of the edge detection algorithm are improved.
It is to be understood that the above-described specific embodiments of the present disclosure are merely illustrative of or illustrative of the principles of the present disclosure and are not to be construed as limiting the present disclosure. Accordingly, any modification, equivalent replacement, improvement or the like made without departing from the spirit and scope of the present disclosure should be included in the protection scope of the present disclosure. Further, it is intended that the following claims cover all such variations and modifications that fall within the scope and bounds of the appended claims, or equivalents of such scope and bounds.

Claims (10)

1. A STT-MRAM-based image edge detection method, the method comprising:
storing a source image acquired in real time into an STT-MRAM memory;
converting the source image from an original color space to a target color space;
performing image processing operation on the source image converted into the target color space, performing multidirectional edge detection by using a Sobel algorithm, and outputting a processed image;
converting the processed image from the target color space to the original color space;
judging the abrasion degree of the processed image by comparing the source image with the processed image converted into the original color space; if the abrasion degree meets a preset condition, outputting the processing image; and if the abrasion degree does not meet the preset condition, re-executing the image processing operation.
2. The method according to claim 1, wherein converting the source image from an original color space to a target color space comprises:
and converting the source image from the original color space to a second color space, and then converting the source image from the second color space to a target color space.
3. The method of claim 2, wherein converting the source image from an original color space to a second color space further comprises: completing a spatial co-range mapping between the original color space and the second color space on the source image.
4. Method according to claim 2, wherein the original color space is in particular an RGB color space; the second color space is specifically an XYZ color space; the target color space is specifically a Lab color space.
5. The method according to claim 1, wherein said performing image processing operations comprises in particular: and calculating the color distance between pixel points in the source image converted into the target color space, and calculating a gradient operator space according to the color distance and the gradient value corresponding to the filter.
6. The method of claim 1, wherein the performing multi-directional edge detection using the Sobel algorithm specifically comprises: and dividing the source image converted into the target color space into one or more gradient directions by taking a preset angle as a direction interval, and performing pixel data processing one by one to output a processed image.
7. The method according to claim 1, wherein determining the degree of wear of the processed image by comparing the source image with the processed image transformed into the original color space comprises: and calculating index data of the source image and the processed image converted into the original color space, and judging the abrasion degree of the processed image according to the index data.
8. The method according to claim 7, wherein the metric data specifically comprises a metric value and a covariance value.
9. The method according to claim 8, wherein determining the degree of wear of the processed image from the indicator data specifically comprises: the smaller the absolute value of the index value and the covariance value difference is, the less the degree of wear of the processed image is.
10. An STT-MRAM-based image edge detection apparatus, the apparatus comprising:
the real-time acquisition module is used for storing the source image acquired in real time into the STT-MRAM memory;
the first color space conversion module is used for converting the source image from an original color space to a target color space;
the multidirectional edge detection module is used for executing image processing operation on the source image converted into the target color space, executing multidirectional edge detection by using a Sobel algorithm and outputting a processed image;
a second color space conversion module for converting the processed image from the target color space to the original color space;
the abrasion degree judging module is used for judging the abrasion degree of the processed image by comparing the source image with the processed image converted into the original color space; if the abrasion degree meets a preset condition, outputting the processing image; and if the abrasion degree does not meet the preset condition, re-executing the image processing operation.
CN202110886678.4A 2021-08-03 2021-08-03 Image edge detection method and device based on STT-MRAM Pending CN113487641A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110886678.4A CN113487641A (en) 2021-08-03 2021-08-03 Image edge detection method and device based on STT-MRAM

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110886678.4A CN113487641A (en) 2021-08-03 2021-08-03 Image edge detection method and device based on STT-MRAM

Publications (1)

Publication Number Publication Date
CN113487641A true CN113487641A (en) 2021-10-08

Family

ID=77944111

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110886678.4A Pending CN113487641A (en) 2021-08-03 2021-08-03 Image edge detection method and device based on STT-MRAM

Country Status (1)

Country Link
CN (1) CN113487641A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114416184A (en) * 2021-12-06 2022-04-29 北京航空航天大学 Memory computing method and device based on virtual reality equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1913573A (en) * 2005-08-09 2007-02-14 佳能株式会社 Image processing apparatus for image retrieval and control method therefor
US20110176726A1 (en) * 2009-12-29 2011-07-21 Postech Academy - Industry Foundation Method of converting color image into grayscale image and recording medium storing program for performing the same
CN102999916A (en) * 2012-12-12 2013-03-27 清华大学深圳研究生院 Edge extraction method of color image
KR20130032127A (en) * 2011-09-22 2013-04-01 상명대학교 산학협력단 Method of abstraction rendering image and apparatus adopting the method
CN104537756A (en) * 2015-01-22 2015-04-22 广州广电运通金融电子股份有限公司 Banknote classification and identification method and device based on Lab color space
KR20190042429A (en) * 2017-10-15 2019-04-24 알레시오 주식회사 Method for image processing

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1913573A (en) * 2005-08-09 2007-02-14 佳能株式会社 Image processing apparatus for image retrieval and control method therefor
US20110176726A1 (en) * 2009-12-29 2011-07-21 Postech Academy - Industry Foundation Method of converting color image into grayscale image and recording medium storing program for performing the same
KR20130032127A (en) * 2011-09-22 2013-04-01 상명대학교 산학협력단 Method of abstraction rendering image and apparatus adopting the method
CN102999916A (en) * 2012-12-12 2013-03-27 清华大学深圳研究生院 Edge extraction method of color image
CN104537756A (en) * 2015-01-22 2015-04-22 广州广电运通金融电子股份有限公司 Banknote classification and identification method and device based on Lab color space
US20180005478A1 (en) * 2015-01-22 2018-01-04 Grg Banking Equipment Co., Ltd. Banknote classification and identification method and device based on lab color space
KR20190042429A (en) * 2017-10-15 2019-04-24 알레시오 주식회사 Method for image processing
CN111226258A (en) * 2017-10-15 2020-06-02 阿莱西奥公司 Signal conversion system and signal conversion method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
曾俊;李德华;: "彩色图像SUSAN边缘检测方法", 计算机工程与应用, no. 15, pages 198 - 200 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114416184A (en) * 2021-12-06 2022-04-29 北京航空航天大学 Memory computing method and device based on virtual reality equipment
CN114416184B (en) * 2021-12-06 2023-08-01 北京航空航天大学 In-memory computing method and device based on virtual reality equipment

Similar Documents

Publication Publication Date Title
CN106780592B (en) Kinect depth reconstruction method based on camera motion and image shading
CN111210477B (en) Method and system for positioning moving object
WO2016062159A1 (en) Image matching method and platform for testing of mobile phone applications
US7885528B2 (en) System and method for focusing a charge coupled device lens on a selected surface of an object
CN108198216A (en) A kind of robot and its position and orientation estimation method and device based on marker
CN110853100A (en) Structured scene vision SLAM method based on improved point-line characteristics
KR20170131500A (en) 3D modeling method and apparatus
US11094082B2 (en) Information processing apparatus, information processing method, robot system, and non-transitory computer-readable storage medium
CN102980535A (en) Angle measurement method and device
CN113011401B (en) Face image posture estimation and correction method, system, medium and electronic equipment
Albanis et al. Pano3d: A holistic benchmark and a solid baseline for 360 depth estimation
CN103632338A (en) Matching curve feature based image registration evaluating method
CN113487641A (en) Image edge detection method and device based on STT-MRAM
CN112465877A (en) Kalman filtering visual tracking stabilization method based on motion state estimation
CN116152218A (en) Intelligent detection method and device for construction quality
CN111951193A (en) Method and apparatus for correcting horizontal distortion of image
CN111739071A (en) Rapid iterative registration method, medium, terminal and device based on initial value
CN109712200B (en) Binocular positioning method and system based on least square principle and side length reckoning
CN113033578B (en) Image calibration method, system, terminal and medium based on multi-scale feature matching
CN113284181A (en) Scene map point and image frame matching method in environment modeling
CN114399526A (en) Pose determination method, pose determination device and storage medium
CN110211239B (en) Augmented reality method, apparatus, device and medium based on label-free recognition
Sun et al. Vision odometer based on RGB-D camera
Yan et al. PLPF‐VSLAM: An indoor visual SLAM with adaptive fusion of point‐line‐plane features
Huai et al. Markov parallel tracking and mapping for probabilistic slam

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20231228

Address after: Room 1605, Building 1, No. 117 Yingshan Red Road, Huangdao District, Qingdao City, Shandong Province, 266400

Applicant after: Qingdao Haicun Microelectronics Co.,Ltd.

Address before: 100191 rooms 504a and 504b, 5th floor, 23 Zhichun Road, Haidian District, Beijing

Applicant before: Zhizhen storage (Beijing) Technology Co.,Ltd.

TA01 Transfer of patent application right