CN116051681A - Processing method and system for generating image data based on a smart watch - Google Patents

Processing method and system for generating image data based on a smart watch

Info

Publication number
CN116051681A
CN116051681A (application CN202310190279.3A)
Authority
CN
China
Prior art keywords
image data
pixel
color
filling
pixel points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310190279.3A
Other languages
Chinese (zh)
Other versions
CN116051681B (en)
Inventor
吴贤荣
曾贤富
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Light Speed Times Technology Co ltd
Original Assignee
Shenzhen Light Speed Times Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Light Speed Times Technology Co ltd filed Critical Shenzhen Light Speed Times Technology Co ltd
Priority to CN202310190279.3A
Publication of CN116051681A
Application granted
Publication of CN116051681B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/40 Filling a planar surface by adding surface attributes, e.g. colour or texture
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/54 Extraction of image or video features relating to texture
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V 10/764 Arrangements using classification, e.g. of video objects
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806 Fusion of extracted features
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a processing method and system for generating image data based on a smart watch, applied to the field of image data processing. After the acquired image data set is divided into a plurality of image data, the image data are classified into matching image data, whose pixel values match the image data set, and non-matching image data, whose pixel values do not; the non-matching image data undergo pixel point filling and color difference filling to generate processed image data whose pixel values match the image data set, so that a captured time period picture retains its original definition and shooting effect even when the shooting focus has unintentionally drifted.

Description

Processing method and system for generating image data based on a smart watch
Technical Field
The invention relates to the field of image data processing, and in particular to a processing method and system for generating image data based on a smart watch.
Background
With the rapid development of technology, users place ever higher demands on watch photography; there has long been a need to shoot panoramic content through the watch, obtaining larger or panoramic pictures by collecting visual information from different angles.
However, in a time period picture shot by an existing smart watch through an ordinary lens, a slight shift of the shooting focus changes the definition within the picture, so the captured time period picture cannot fully achieve the original shooting effect.
Disclosure of Invention
The invention aims to solve the problem that, in a time period picture shot by a conventional smart watch through an ordinary lens, a slight deviation of the shooting focus changes the definition within the picture, so that the captured time period picture cannot fully achieve the intended shooting effect.
To solve this technical problem, the invention adopts the following technical means:
the invention provides a processing method for generating image data based on a smart watch, which comprises the following steps:
collecting an image data set with an image sensor, dividing the image data set into a plurality of image data based on a preset period, loading the plurality of image data into a preset independent space, performing edge detection on the plurality of image data, and judging whether the pixel values of the plurality of image data match;
if not, classifying the image data into matching image data and non-matching image data, acquiring a plurality of pixel values in the non-matching image data, identifying at least one pixel point in the plurality of pixel values, deriving from the at least one pixel point the image resolution corresponding to the plurality of pixel values, differentially comparing the image resolution with the matching image data, and generating, based on at least the number of missing pixel points in each pixel value, a missing pixel region to be filled corresponding to the non-matching image data;
for the edge pixel points of the missing pixel region, confirming the adjacent spans between the pixel points in the missing pixel region, capturing at least one missing pixel point position in the missing pixel region, constructing a filling queue of the missing pixel points based on the boundary distance mapping values between each missing pixel point position and the edge pixel points, and iteratively filling the missing pixel points according to the filling queue to generate non-matching image data with color differences;
based on a first correspondence between pixel points and color factors pre-established in the matching image data, constructing a second correspondence between pixel points and color factors in the non-matching image data with color differences, differentially comparing the first correspondence with the second correspondence to obtain the non-overlapping portion of their color factors, taking the non-overlapping portion as a residual color filling path for the non-matching image data, and color-filling the non-matching image data with color differences according to the residual color filling path to generate processed image data.
Further, the step of pre-establishing the first correspondence between pixel points and color factors in the matching image data includes (a sketch of this correspondence follows below):
coordinate-coding each pixel point of the matching image data based on a coordinate point set in a preset color space, and recording the corresponding coordinates and corresponding color values of each pixel point in the color space, wherein the corresponding coordinates comprise an x coordinate coefficient and a y coordinate coefficient, and the corresponding color values comprise the red, green and blue components of the three primary colors;
and acquiring the region colors of the image contour according to the image contour constructed from the pixel points, sequentially assigning colors to each adjacent region color based on the base point center of the color space, and generating the first correspondence between the pixel points and the color factors.
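By way of illustration, a minimal sketch of recording this first correspondence follows, assuming the matching image data is a plain RGB array; the (x, y) coordinate coding and the dictionary layout are assumptions, and the region-color assignment from the base point is covered separately in the embodiments below.

```python
import numpy as np
from typing import Dict, Tuple

def first_correspondence(image: np.ndarray) -> Dict[Tuple[int, int], Tuple[int, int, int]]:
    """image: H x W x 3 RGB array; returns (x, y) -> (R, G, B) for every pixel point."""
    mapping = {}
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            r, g, b = image[y, x]
            mapping[(x, y)] = (int(r), int(g), int(b))   # coordinate -> color value
    return mapping

# Example: a 2x2 matching image, coordinate-coded into the correspondence table.
img = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)
print(first_correspondence(img))
```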
Further, the step of iteratively filling the missing pixel points according to the filling queue to generate non-matching image data with color differences includes:
generating filling values of the missing pixel points based on the boundary distance mapping values;
judging whether the filling value is greater than the boundary distance mapping value;
if not, generating the filling value of each missing pixel point, marking the filling values of the missing pixel points according to the filling queue, and filling the missing pixel points based on a preset priority to obtain the non-matching image data with color differences, wherein the preset priority specifically orders filling values from high to low.
Further, the step of generating the missing pixel region to be filled corresponding to the non-matching image data based on at least the number of missing pixel points in each pixel value includes:
acquiring the number of the pixel points corresponding to a single pixel value in non-matching image data;
judging whether this number of pixel points is the same as the number of pixel points of the matching image data;
if not, collecting the pixel difference value between the two pixel point counts, scanning the missing pixel point coordinates corresponding to the pixel difference value based on a preset pixel coordinate set, identifying the specific orientations of the missing pixel points in the pixel coordinate set, connecting the specific orientations, and establishing the missing pixel region to be filled corresponding to the non-matching image data.
Further, the step of differentially comparing the image resolution, presented by the at least one pixel point for the plurality of pixel values, with the matching image data includes:
acquiring the number of pixel points existing in each inch of area in the non-matching image data;
judging whether the number of pixel points matches a preset pixel number sequence;
if not, generating a difference sequence corresponding to the pixel points, acquiring the missing pixel points in the difference sequence based on the pixel point sequence, and identifying the axis direction corresponding to the missing pixel points in the difference sequence to obtain the sequence difference values of the missing pixel points, wherein the axis directions comprise the horizontal axis and the vertical axis.
Further, after the step of color-filling the non-matching image data with color differences according to the residual color filling path to generate processed image data, the method further includes:
extracting texture features of the processed image data to obtain at least one texture feature of the processed image data;
superposing and fusing the at least one texture feature with the matching image data to obtain a fusion error coefficient corresponding to the superposition fusion;
judging whether the fusion error coefficient reaches a preset matching coefficient;
if not, generating the difference value between the fusion error coefficient and the preset matching coefficient, and continuing the superposition fusion of the at least one texture feature with the matching image data according to a preset increment until the fusion error coefficient corresponding to the superposition fusion reaches the preset matching coefficient, then stopping the superposition fusion to obtain a superposition-fused image data set.
Further, the step of loading the plurality of image data into a preset independent space and performing edge detection on the plurality of image data includes:
identifying the image features in each image data and acquiring their occupation ratios in the image data, wherein the image features comprise color factors and texture factors;
and extracting, from the image data, inferior image data whose occupation ratio is lower than a preset average level, and taking the inferior image data as the detection object of the edge detection.
The invention also provides a processing system for generating image data based on a smart watch, which comprises:
the judging module is used for collecting an image data set with an image sensor, dividing the image data set into a plurality of image data based on a preset period, loading the plurality of image data into a preset independent space, performing edge detection on the plurality of image data, and judging whether the pixel values of the plurality of image data match;
the execution module is used for, if not, classifying the image data into matching image data and non-matching image data, acquiring a plurality of pixel values in the non-matching image data, identifying at least one pixel point in the plurality of pixel values, deriving from the at least one pixel point the image resolution corresponding to the plurality of pixel values, differentially comparing the image resolution with the matching image data, and generating, based on at least the number of missing pixel points in each pixel value, a missing pixel region to be filled corresponding to the non-matching image data;
the generating module is used for confirming, for the edge pixel points of the missing pixel region, the adjacent spans between the pixel points in the missing pixel region, capturing at least one missing pixel point position in the missing pixel region, constructing a filling queue of the missing pixel points based on the boundary distance mapping values between each missing pixel point position and the edge pixel points, and iteratively filling the missing pixel points according to the filling queue to generate non-matching image data with color differences;
and the filling module is used for constructing, based on a first correspondence between pixel points and color factors pre-established in the matching image data, a second correspondence between pixel points and color factors in the non-matching image data with color differences, differentially comparing the first correspondence with the second correspondence to obtain the non-overlapping portion of their color factors, taking the non-overlapping portion as a residual color filling path for the non-matching image data, and color-filling the non-matching image data with color differences according to the residual color filling path to generate processed image data.
Further, the filling module further includes:
a recording unit, configured to coordinate-code each pixel point of the matching image data based on a coordinate point set in a preset color space, and record the corresponding coordinates and corresponding color values of each pixel point in the color space, wherein the corresponding coordinates comprise an x coordinate coefficient and a y coordinate coefficient, and the corresponding color values comprise the red, green and blue components of the three primary colors;
and a generating unit, configured to acquire the region colors of the image contour according to the image contour constructed from the pixel points, sequentially assign colors to each adjacent region color based on the base point center of the color space, and generate the first correspondence between the pixel points and the color factors.
Further, the generating module further includes:
a second generating unit, configured to generate a filling value of each missing pixel point based on the boundary distance mapping value;
a judging unit, configured to judge whether the filling value is greater than the boundary distance mapping value;
and the execution unit is used for, if not, generating the filling value of each missing pixel point, marking the filling values of the missing pixel points according to the filling queue, and filling the missing pixel points based on a preset priority to obtain the non-matching image data with color differences, wherein the preset priority specifically orders filling values from high to low.
The invention provides a processing method and system for generating image data based on a smart watch, with the following beneficial effects:
according to the invention, after the acquired image data set is divided into a plurality of image data, the image data are classified into matching image data, whose pixel values match the image data set, and non-matching image data, whose pixel values do not; the non-matching image data undergo pixel point filling and color difference filling to generate processed image data whose pixel values match the image data set, so that a captured time period picture retains its original definition and shooting effect even when the shooting focus has unintentionally drifted.
Drawings
FIG. 1 is a flow chart of an embodiment of a processing method for generating image data based on a smart watch according to the present invention;
FIG. 2 is a block diagram illustrating an embodiment of a processing system for generating image data based on a smart watch according to the present invention.
Detailed Description
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the embodiments and the accompanying drawings. It should be understood that the specific embodiments described herein serve only to illustrate the present invention and are not intended to limit it.
The technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, a processing method for generating image data based on a smart watch according to an embodiment of the present invention includes:
S1: collecting an image data set with an image sensor, dividing the image data set into a plurality of image data based on a preset period, loading the plurality of image data into a preset independent space, performing edge detection on the plurality of image data, and judging whether the pixel values of the plurality of image data match;
S2: if not, classifying the image data into matching image data and non-matching image data, acquiring a plurality of pixel values in the non-matching image data, identifying at least one pixel point in the plurality of pixel values, deriving from the at least one pixel point the image resolution corresponding to the plurality of pixel values, differentially comparing the image resolution with the matching image data, and generating, based on at least the number of missing pixel points in each pixel value, a missing pixel region to be filled corresponding to the non-matching image data;
S3: for the edge pixel points of the missing pixel region, confirming the adjacent spans between the pixel points in the missing pixel region, capturing at least one missing pixel point position in the missing pixel region, constructing a filling queue of the missing pixel points based on the boundary distance mapping values between each missing pixel point position and the edge pixel points, and iteratively filling the missing pixel points according to the filling queue to generate non-matching image data with color differences;
S4: based on a first correspondence between pixel points and color factors pre-established in the matching image data, constructing a second correspondence between pixel points and color factors in the non-matching image data with color differences, differentially comparing the first correspondence with the second correspondence to obtain the non-overlapping portion of their color factors, taking the non-overlapping portion as a residual color filling path for the non-matching image data, and color-filling the non-matching image data with color differences according to the residual color filling path to generate processed image data.
In this embodiment, the system collects the image data set recorded during shooting with an image sensor preset in the smart watch, divides the image data set into a plurality of image data based on a preset time period, and then loads the image data one by one into a preset independent space for edge detection, so as to judge whether the pixel values in the image data match each other and execute the corresponding steps. For example, when the pixel values in the image data match each other, that is, when the time period pictures shot by the smart watch join up with each other, there is no blurring of definition or dropping of frames in the shot pictures. When the pixel values in the image data cannot match each other, that is, when the time period pictures shot by the smart watch cannot join up, the system classifies the image data into matching image data and non-matching image data, acquires a plurality of pixel values in the non-matching image data and identifies at least one pixel point in those pixel values, differentially compares the image resolution presented by those pixel points with the matching image data, obtains from the comparison result the number of missing pixel points for the pixel values in the non-matching image data, and generates the missing pixel region to be filled corresponding to the non-matching image data. After confirming the adjacent spans between the pixel points in the missing pixel region, the system captures at least one missing pixel point position in the region and constructs a filling queue of the missing pixel points based on the boundary distance mapping values between each missing pixel point position and the edge pixel points; it then iteratively fills the missing pixel points in the region according to the filling queue, so that the pixels of the non-matching image data become consistent with those of the matching image data, generating color-difference non-matching image data whose colors remain to be filled. Based on the first correspondence between pixel point proportions and color factor proportions pre-established in the matching image data, the system constructs the second correspondence between pixel point proportions and color factor proportions in the color-difference non-matching image data; differentially comparing the two correspondences yields the color factor difference between the color-difference non-matching image data and the matching image data. The non-overlapping portion of the color factors of the two correspondences serves as the residual color filling path of the non-matching image data, along which the color-difference non-matching image data is color-filled to generate the corresponding processed image data.
It should be noted that pixel filling must fill a plurality of pixel points within the pixel value area. The boundary distance mapping values of the edge pixel points give the specific number of pixel points to be filled from the outside inward across the whole pixel value area, and iterative filling fills these pixel points one by one from the outside in, which avoids erroneous or missed fills during the filling process and guarantees the integrity of the image data after pixel filling.
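As an illustration of this outside-in ordering, the sketch below fills a masked region by sorting missing pixels on their distance to the nearest known pixel, a stand-in for the boundary distance mapping value described above; the patent does not specify the distance metric or how a fill value is computed, so the Euclidean distance transform and the averaging of known 4-neighbours are assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def fill_outside_in(image: np.ndarray, missing: np.ndarray) -> np.ndarray:
    """image: 2-D grayscale array; missing: boolean mask of missing pixels."""
    out = image.astype(float)
    known = ~missing
    # Distance of every missing pixel to the nearest known (edge) pixel:
    # a stand-in for the boundary distance mapping value.
    dist = distance_transform_edt(missing)
    # Filling queue: pixels closest to the boundary first (outside -> inside).
    queue = sorted(zip(*np.nonzero(missing)), key=lambda rc: dist[rc])
    for r, c in queue:
        neighbours = []
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < out.shape[0] and 0 <= cc < out.shape[1] and known[rr, cc]:
                neighbours.append(out[rr, cc])
        if neighbours:                 # iterative fill from already-known pixels
            out[r, c] = float(np.mean(neighbours))
            known[r, c] = True
    return out
```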
In this embodiment, step S4 of pre-establishing the first correspondence between pixel points and color factors in the matching image data includes:
S41: coordinate-coding each pixel point of the matching image data based on a coordinate point set in a preset color space, and recording the corresponding coordinates and corresponding color values of each pixel point in the color space, wherein the corresponding coordinates comprise an x coordinate coefficient and a y coordinate coefficient, and the corresponding color values comprise the red, green and blue components of the three primary colors;
S42: and acquiring the region colors of the image contour according to the image contour constructed from the pixel points, sequentially assigning colors to each adjacent region color based on the base point center of the color space, and generating the first correspondence between the pixel points and the color factors.
In this embodiment, the system coordinate-codes each pixel point of the matching image data based on a preset color space coordinate point set, generating for each pixel point a coordinate coefficient and the color value corresponding to that coordinate in the color space coordinate point set; it then constructs the image contour from the coordinate point set of the pixel points, acquires the region colors to be filled within the image contour, and sequentially assigns colors to each adjacent region color based on the base point center of the color space, thereby generating the first correspondence between pixel points and color factors in the matching image data.
It should be noted that colors are assigned to the adjacent region colors sequentially from the base point center of the color space, specifically in the order (±1, ±1), (±2, ±2), ..., (±x, ±y), until all region colors are completely filled; the length of this color assignment process depends on how many region colors remain to be filled in the color space.
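A minimal sketch of this ring-by-ring assignment, assuming the base point and the list of adjacent region colors are already known; the offset pattern (±1, ±1), (±2, ±2), ... follows the description above, while the data types and color names are illustrative.

```python
from typing import Dict, List, Tuple

def assign_region_colors(base: Tuple[int, int],
                         region_colors: List[str]) -> Dict[Tuple[int, int], str]:
    """Assign each region color to a point, ring by ring around the base point."""
    assigned: Dict[Tuple[int, int], str] = {}
    k, i = 1, 0
    while i < len(region_colors):
        # One ring: the four offsets (±k, ±k) around the base point.
        for dx, dy in ((k, k), (k, -k), (-k, k), (-k, -k)):
            if i >= len(region_colors):
                break
            assigned[(base[0] + dx, base[1] + dy)] = region_colors[i]
            i += 1
        k += 1  # expand to the next ring, up to (±x, ±y)
    return assigned

# Example: four adjacent region colors assigned around the base point (0, 0).
print(assign_region_colors((0, 0), ["red", "green", "blue", "yellow"]))
```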
In this embodiment, step S3 of iteratively filling the missing pixel points according to the filling queue to generate the non-matching image data with color differences includes:
S31: generating the filling values of the missing pixel points based on the boundary distance mapping values;
S32: judging whether a filling value is greater than the boundary distance mapping value;
S33: if not, generating the filling value of each missing pixel point, marking the filling values of the missing pixel points according to the filling queue, and filling the missing pixel points based on a preset priority to obtain the non-matching image data with color differences, wherein the preset priority specifically orders filling values from high to low.
In this embodiment, the system generates the value to be filled for each missing pixel point based on the boundary distance mapping values between the missing pixel point positions and the edge pixel points, and judges whether the value to be filled is greater than the boundary distance mapping value, so as to execute the corresponding step. For example, if the value to be filled of some missing pixel point is greater than the boundary distance mapping value and the point is filled anyway, pixel overlap appears in the shot image, and the degree of overlap grows as the filling value overflows the boundary distance mapping value; therefore, when the value to be filled of a pixel point is about to exceed the boundary distance mapping value, the filling value of that pixel point is clamped to the boundary distance mapping value, which avoids pixel overlap. When the values to be filled of the missing pixel points are not greater than the boundary distance mapping value, the system generates the corresponding filling value of each missing pixel point, marks the filling values according to the filling queue, and fills each missing pixel point based on the preset priority; the non-matching image data with color differences is obtained after filling.
It should be noted that the missing pixel points are marked according to the filling queue because each missing pixel point has a different filling value, so marking is needed to avoid erroneous filling. Filling priorities are set for the missing pixel points because several missing pixel points to be filled may lie on the same horizontal or vertical axis at the same time; filling the missing pixel points on a shared axis in order of pixel filling quantity from high to low avoids incorrect filling when too many missing pixel points share the same horizontal or vertical axis.
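The marked filling queue with high-to-low priority can be sketched with a max-heap, as below; the fill values and coordinates are illustrative, and Python's min-heap is negated to pop the largest fill value first.

```python
import heapq
from typing import List, Tuple

def fill_order(marked: List[Tuple[float, Tuple[int, int]]]) -> List[Tuple[int, int]]:
    """marked: (fill value, (row, col)) pairs; returns positions in fill order."""
    heap = [(-value, pos) for value, pos in marked]   # negate: pop largest first
    heapq.heapify(heap)
    order = []
    while heap:
        _, pos = heapq.heappop(heap)                  # highest fill value first
        order.append(pos)
    return order

# Example: three marked missing pixels on the same horizontal axis (row 5).
print(fill_order([(2.0, (5, 1)), (7.5, (5, 3)), (4.2, (5, 2))]))
# [(5, 3), (5, 2), (5, 1)]: filled in order of fill value, high to low
```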
In this embodiment, step S2 of generating the missing pixel region to be filled corresponding to the non-matching image data based on at least the number of missing pixel points in each pixel value includes:
S21: acquiring the number of pixel points corresponding to a single pixel value in the non-matching image data;
S22: judging whether this number of pixel points is the same as the number of pixel points of the matching image data;
S23: if not, collecting the pixel difference value between the two pixel point counts, scanning the missing pixel point coordinates corresponding to the pixel difference value based on a preset pixel coordinate set, identifying the specific orientations of the missing pixel points in the pixel coordinate set, connecting the specific orientations, and establishing the missing pixel region to be filled corresponding to the non-matching image data.
In this embodiment, the system acquires the number of pixel points corresponding to a single pixel value in the non-matching image data and judges whether it is the same as the number of pixel points of the matching image data, so as to execute the corresponding step. For example, when the two counts are the same, the missing pixel region requiring filling in the non-matching image data has already been filled. When the two counts differ, the system collects the pixel difference value between them, scans the missing pixel point coordinates corresponding to the pixel difference value in the preset pixel coordinate set of the non-matching image data to identify the specific orientations of the missing pixel points in the coordinate set, and connects the missing pixel points in the coordinate set to establish the missing pixel region to be filled corresponding to the non-matching image data.
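A small sketch of this region construction, under the assumption that pixel positions are plain (row, column) coordinates: the missing coordinates are the set difference between the preset pixel coordinate set and the coordinates actually present, and connecting their orientations is approximated here by a rectangular bounding region.

```python
from typing import Set, Tuple

Coord = Tuple[int, int]  # (row, column)

def missing_region(preset: Set[Coord],
                   present: Set[Coord]) -> Tuple[Set[Coord], Tuple[int, int, int, int]]:
    """Returns the missing coordinates and the region that encloses them."""
    missing = preset - present                       # the pixel difference
    rows = [r for r, _ in missing]
    cols = [c for _, c in missing]
    # Connect the specific orientations into one fill region (bounding rectangle).
    region = (min(rows), min(cols), max(rows), max(cols))
    return missing, region

# Example: a 4x4 preset coordinate set with three pixel points missing.
preset = {(r, c) for r in range(4) for c in range(4)}
present = preset - {(1, 1), (1, 2), (2, 1)}
print(missing_region(preset, present))
```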
In this embodiment, step S2 of differentially comparing the image resolution, presented by the at least one pixel point for the plurality of pixel values, with the matching image data includes:
S201: acquiring the number of pixel points existing in each inch of area in the non-matching image data;
S202: judging whether the number of pixel points matches a preset pixel number sequence;
S203: if not, generating a difference sequence corresponding to the pixel points, acquiring the missing pixel points in the difference sequence based on the pixel point sequence, and identifying the axis direction corresponding to the missing pixel points in the difference sequence to obtain the sequence difference values of the missing pixel points, wherein the axis directions comprise the horizontal axis and the vertical axis.
In this embodiment, the system acquires the number of pixel points present in each inch of area in the non-matching image data and judges whether it matches the preset pixel number sequence, so as to execute the corresponding step. For example, if the number of pixel points per inch of area in the non-matching image data is 2468 while the preset pixel number sequence is 1600×900, the system judges that the pixel count cannot match the preset sequence; it then generates the difference sequence 1582×880 of the non-matching image data, compares it against the preset sequence 1600×900 to obtain the missing pixel counts 18×20 in the difference sequence, and identifies the axes of the missing pixels in the difference sequence, obtaining sequence difference values of 18 missing pixel points on the horizontal axis and 20 on the vertical axis.
A displayed pixel count of 1600×900 in the image data means 1600 pixel points on the horizontal axis and 900 on the vertical axis; at the same picture size, the higher the resolution of the shot image, the finer the displayed image effect.
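The arithmetic of the example reduces to a per-axis subtraction, sketched below with the figures from the text.

```python
# Figures from the example above.
preset_seq = (1600, 900)   # preset pixel number sequence (horizontal, vertical)
diff_seq = (1582, 880)     # difference sequence of the non-matching image data
missing = (preset_seq[0] - diff_seq[0], preset_seq[1] - diff_seq[1])
print(missing)             # (18, 20): 18 missing on the horizontal axis, 20 on the vertical
```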
In this embodiment, after step S4 of color-filling the non-matching image data with color differences according to the residual color filling path to generate the processed image data, the method further includes:
S301: extracting texture features from the processed image data to obtain at least one texture feature of the processed image data;
S302: superposing and fusing the at least one texture feature with the matching image data to obtain the fusion error coefficient corresponding to the superposition fusion;
S303: judging whether the fusion error coefficient reaches a preset matching coefficient;
S304: if not, generating the difference value between the fusion error coefficient and the preset matching coefficient, and continuing the superposition fusion of the at least one texture feature with the matching image data according to a preset increment until the fusion error coefficient corresponding to the superposition fusion reaches the preset matching coefficient, then stopping the superposition fusion to obtain a superposition-fused image data set.
In this embodiment, the system extracts texture features from the processed image data to obtain at least one texture feature, superposes and fuses those texture features with the matching image data to generate the fusion error coefficient produced by the superposition fusion, and judges whether the fusion error coefficient reaches the preset matching coefficient, so as to execute the corresponding step. For example, if superposition fusion with one texture feature reaches a fusion error coefficient of 55% against a preset matching coefficient of 58.5%, the system generates the 3.5% difference between them and continues the superposition fusion with further texture features; when the fusion error coefficient reaches 56.3%, which lies within the tolerance of the 58.5% preset matching coefficient, the superposition fusion stops and the superposition-fused image data set is generated.
It should be noted that the fusion error coefficient is considered to reach the preset matching coefficient once it comes within 3% of it. The preset increment grows with the number of texture features used in the initial superposition fusion: if the fusion error coefficient after initially fusing two texture features with the matching image data cannot reach the preset matching coefficient, at least two or more further texture features must be added to the continued superposition fusion.
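A sketch of this stop-when-within-tolerance loop, assuming a fusion-error function is available; the 58.5% target, the 3% tolerance and the one-feature-at-a-time increment mirror the example above, while `coefficient` itself is a hypothetical stand-in, since the patent does not define how the fusion error coefficient is computed.

```python
from typing import Callable, List

def fuse_until_matched(features: List[str],
                       coefficient: Callable[[int], float],
                       target: float = 0.585,
                       tolerance: float = 0.03) -> int:
    """Returns how many texture features were fused before the fusion error
    coefficient came within `tolerance` of the preset matching coefficient."""
    used = 1
    while used < len(features) and coefficient(used) < target - tolerance:
        used += 1                      # preset increment: fuse one more feature
    return used

# Example mirroring the text: 1 feature -> 55.0%, 2 features -> 56.3%,
# and 56.3% lies within 3% of the 58.5% preset matching coefficient.
demo = {1: 0.550, 2: 0.563}
print(fuse_until_matched(["t1", "t2"], lambda n: demo[n]))  # 2
```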
In this embodiment, step S1 of loading the plurality of image data into a preset independent space and performing edge detection on the plurality of image data includes:
S11: identifying the image features in each image data and acquiring their occupation ratios in the image data, wherein the image features comprise color factors and texture factors;
S12: and extracting, from the image data, inferior image data whose occupation ratio is lower than a preset average level, and taking the inferior image data as the detection object of the edge detection.
In this embodiment, the system identifies the image features in each image data and obtains their occupation ratios in the image data, specifically the occupation ratios of the color factors and texture factors; it then extracts the image data whose occupation ratio is below the preset average level, determines these to be inferior image data, and takes the inferior image data as the detection objects for edge detection.
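A minimal sketch of this pre-selection, assuming each image's feature occupation ratio has already been measured and that the preset average level is the mean over the set; the frame names and ratio values are illustrative.

```python
from typing import Dict, List

def select_inferior(ratios: Dict[str, float]) -> List[str]:
    """ratios: image id -> occupation ratio of its color and texture factors."""
    average = sum(ratios.values()) / len(ratios)     # the preset average level
    # Images below the average are inferior and become edge detection objects.
    return [name for name, ratio in ratios.items() if ratio < average]

# Example with illustrative frames and ratios.
ratios = {"frame_01": 0.82, "frame_02": 0.47, "frame_03": 0.91, "frame_04": 0.55}
print(select_inferior(ratios))   # ['frame_02', 'frame_04']
```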
Referring to fig. 2, a processing system for generating image data based on a smart watch according to an embodiment of the present invention includes:
The judging module 10 is configured to collect an image data set with an image sensor, divide the image data set into a plurality of image data based on a preset period, load the plurality of image data into a preset independent space, perform edge detection on the plurality of image data, and judge whether the pixel values of the plurality of image data match;
the execution module 20 is configured to, if not, classify the image data into matching image data and non-matching image data, acquire a plurality of pixel values in the non-matching image data, identify at least one pixel point in the plurality of pixel values, derive from the at least one pixel point the image resolution corresponding to the plurality of pixel values, differentially compare the image resolution with the matching image data, and generate, based on at least the number of missing pixel points in each pixel value, a missing pixel region to be filled corresponding to the non-matching image data;
the generating module 30 is configured to confirm, for the edge pixel points of the missing pixel region, the adjacent spans between the pixel points in the missing pixel region, capture at least one missing pixel point position in the missing pixel region, construct a filling queue of the missing pixel points based on the boundary distance mapping values between each missing pixel point position and the edge pixel points, and iteratively fill the missing pixel points according to the filling queue to generate non-matching image data with color differences;
and the filling module 40 is configured to construct, based on a first correspondence between pixel points and color factors pre-established in the matching image data, a second correspondence between pixel points and color factors in the non-matching image data with color differences, differentially compare the first correspondence with the second correspondence to obtain the non-overlapping portion of their color factors, take the non-overlapping portion as a residual color filling path for the non-matching image data, and color-fill the non-matching image data with color differences according to the residual color filling path to generate processed image data.
In this embodiment, the judging module 10 collects the image data set recorded during shooting with an image sensor preset in the smart watch, divides the image data set into a plurality of image data based on a preset period, and then loads the image data one by one into a preset independent space for edge detection, so as to judge whether the pixel values in the image data match each other and execute the corresponding steps. For example, when the pixel values in the image data match each other, that is, when the time period pictures shot by the smart watch join up with each other, there is no blurring of definition or dropping of frames in the shot pictures. When the pixel values in the image data cannot match each other, that is, when the time period pictures shot by the smart watch cannot join up, the execution module 20 classifies the image data into matching image data and non-matching image data, acquires a plurality of pixel values in the non-matching image data and identifies at least one pixel point in those pixel values, differentially compares the image resolution presented by those pixel points with the matching image data, obtains from the comparison result the number of missing pixel points for the pixel values in the non-matching image data, and generates the missing pixel region to be filled corresponding to the non-matching image data. After confirming the adjacent spans between the pixel points in the missing pixel region, the generating module 30 captures at least one missing pixel point position in the region and constructs a filling queue of the missing pixel points based on the boundary distance mapping values between each missing pixel point position and the edge pixel points; it then iteratively fills the missing pixel points in the region according to the filling queue, so that the pixels of the non-matching image data become consistent with those of the matching image data, generating color-difference non-matching image data whose colors remain to be filled. Based on the first correspondence between pixel point proportions and color factor proportions pre-established in the matching image data, the filling module 40 constructs the second correspondence between pixel point proportions and color factor proportions in the color-difference non-matching image data; differentially comparing the two correspondences yields the color factor difference between the color-difference non-matching image data and the matching image data. The non-overlapping portion of the color factors of the two correspondences serves as the residual color filling path of the non-matching image data, along which the color-difference non-matching image data is color-filled to generate the corresponding processed image data.
It should be noted that pixel filling must fill a plurality of pixel points within the pixel value area. The boundary distance mapping values of the edge pixel points give the specific number of pixel points to be filled from the outside inward across the whole pixel value area, and iterative filling fills these pixel points one by one from the outside in, which avoids erroneous or missed fills during the filling process and guarantees the integrity of the image data after pixel filling.
In this embodiment, the filling module further includes:
a recording unit, configured to coordinate-code each pixel point of the matching image data based on a coordinate point set in a preset color space, and record the corresponding coordinates and corresponding color values of each pixel point in the color space, wherein the corresponding coordinates comprise an x coordinate coefficient and a y coordinate coefficient, and the corresponding color values comprise the red, green and blue components of the three primary colors;
and a generating unit, configured to acquire the region colors of the image contour according to the image contour constructed from the pixel points, sequentially assign colors to each adjacent region color based on the base point center of the color space, and generate the first correspondence between the pixel points and the color factors.
In this embodiment, the system coordinate-codes each pixel point of the matching image data based on a preset color space coordinate point set, generating for each pixel point a coordinate coefficient and the color value corresponding to that coordinate in the color space coordinate point set; it then constructs the image contour from the coordinate point set of the pixel points, acquires the region colors to be filled within the image contour, and sequentially assigns colors to each adjacent region color based on the base point center of the color space, thereby generating the first correspondence between pixel points and color factors in the matching image data.
It should be noted that colors are assigned to the adjacent region colors sequentially from the base point center of the color space, specifically in the order (±1, ±1), (±2, ±2), ..., (±x, ±y), until all region colors are completely filled; the length of this color assignment process depends on how many region colors remain to be filled in the color space.
In this embodiment, the generating module further includes:
a second generating unit, configured to generate a filling value of each missing pixel point based on the boundary distance mapping value;
a judging unit, configured to judge whether the filling value is greater than the boundary distance mapping value;
and the execution unit is used for, if not, generating the filling value of each missing pixel point, marking the filling values of the missing pixel points according to the filling queue, and filling the missing pixel points based on a preset priority to obtain the non-matching image data with color differences, wherein the preset priority specifically orders filling values from high to low.
In this embodiment, the system generates the value to be filled for each missing pixel point based on the boundary distance mapping values between the missing pixel point positions and the edge pixel points, and judges whether the value to be filled is greater than the boundary distance mapping value, so as to execute the corresponding step. For example, if the value to be filled of some missing pixel point is greater than the boundary distance mapping value and the point is filled anyway, pixel overlap appears in the shot image, and the degree of overlap grows as the filling value overflows the boundary distance mapping value; therefore, when the value to be filled of a pixel point is about to exceed the boundary distance mapping value, the filling value of that pixel point is clamped to the boundary distance mapping value, which avoids pixel overlap. When the values to be filled of the missing pixel points are not greater than the boundary distance mapping value, the system generates the corresponding filling value of each missing pixel point, marks the filling values according to the filling queue, and fills each missing pixel point based on the preset priority; the non-matching image data with color differences is obtained after filling.
It should be noted that the missing pixel points are marked according to the filling queue because each missing pixel point has a different filling value, so marking is needed to avoid erroneous filling. Filling priorities are set for the missing pixel points because several missing pixel points to be filled may lie on the same horizontal or vertical axis at the same time; filling the missing pixel points on a shared axis in order of pixel filling quantity from high to low avoids incorrect filling when too many missing pixel points share the same horizontal or vertical axis.
In this embodiment, the execution module further includes:
an obtaining unit, configured to obtain the number of pixel points corresponding to a single pixel value in non-matching image data;
the second judging unit is used for judging whether this number of pixel points is the same as the number of pixel points of the matching image data;
and the second execution unit is used for, if not, collecting the pixel difference value between the two pixel point counts, scanning the missing pixel point coordinates corresponding to the pixel difference value based on a preset pixel coordinate set, identifying the specific orientations of the missing pixel points in the pixel coordinate set, connecting the specific orientations, and establishing the missing pixel region to be filled corresponding to the non-matching image data.
In this embodiment, the system acquires the number of pixel points corresponding to a single pixel value in the non-matching image data and judges whether it is the same as the number of pixel points of the matching image data, so as to execute the corresponding step. For example, when the two counts are the same, the missing pixel region requiring filling in the non-matching image data has already been filled. When the two counts differ, the system collects the pixel difference value between them, scans the missing pixel point coordinates corresponding to the pixel difference value in the preset pixel coordinate set of the non-matching image data to identify the specific orientations of the missing pixel points in the coordinate set, and connects the missing pixel points in the coordinate set to establish the missing pixel region to be filled corresponding to the non-matching image data.
In this embodiment, the execution module further includes:
a second acquisition unit, configured to acquire the number of pixel points present per inch of area in the non-matching image data;
the third judging unit is used for judging whether the number of pixel points matches a preset pixel number sequence;
and the third execution unit is used for, if not, generating a difference sequence corresponding to the pixel points, acquiring the missing pixel points in the difference sequence based on the pixel point sequence, and identifying the axis direction corresponding to the missing pixel points in the difference sequence to obtain the sequence difference values of the missing pixel points, wherein the axis directions comprise the horizontal axis and the vertical axis.
In this embodiment, the system acquires the number of pixel points present in each inch of area in the non-matching image data and judges whether it matches the preset pixel number sequence, so as to execute the corresponding step. For example, if the number of pixel points per inch of area in the non-matching image data is 2468 while the preset pixel number sequence is 1600×900, the system judges that the pixel count cannot match the preset sequence; it then generates the difference sequence 1582×880 of the non-matching image data, compares it against the preset sequence 1600×900 to obtain the missing pixel counts 18×20 in the difference sequence, and identifies the axes of the missing pixels in the difference sequence, obtaining sequence difference values of 18 missing pixel points on the horizontal axis and 20 on the vertical axis.
A pixel number of 1600×900 in the image data means that there are 1600 pixel points along the horizontal axis and 900 pixel points along the vertical axis; at the same screen size, the higher the resolution of the captured image, the finer the resulting display effect.
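For illustration, a worked Python sketch of the resolution comparison above; the function name resolution_shortfall and the tuple representation of the sequences are editorial assumptions:

def resolution_shortfall(difference_sequence: tuple, preset_sequence: tuple) -> tuple:
    """Compare a difference sequence against the preset pixel number
    sequence and return the missing pixel counts per axis."""
    missing_horizontal = preset_sequence[0] - difference_sequence[0]  # horizontal-axis shortfall
    missing_vertical = preset_sequence[1] - difference_sequence[1]    # vertical-axis shortfall
    return missing_horizontal, missing_vertical

# The example from the description: the difference sequence 1582x880 against
# the preset sequence 1600x900 yields 18 missing columns and 20 missing rows.
print(resolution_shortfall((1582, 880), (1600, 900)))  # (18, 20)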
In this embodiment, further comprising:
the extraction module is used for extracting texture features of the processed image data to obtain at least one texture feature of the processed image data;
the fusion module is used for carrying out superposition fusion on the at least one texture feature and the matched image data to obtain a fusion error coefficient corresponding to the superposition fusion;
the second judging module is used for judging whether the fusion error coefficient reaches a preset matching coefficient or not;
and the second execution module is used for, if not, generating the difference value between the fusion error coefficient and the preset matching coefficient, and continuing to superpose and fuse the at least one texture feature with the matching image data according to a preset increment until the fusion error coefficient corresponding to the superposition fusion reaches the preset matching coefficient, at which point the superposition fusion stops and a superposition-fused image data set is obtained.
In this embodiment, the system extracts texture features from the processed image data to obtain at least one texture feature, superposes and fuses those texture features with the matching image data to generate the fusion error coefficient produced by the superposition fusion, and judges whether that coefficient reaches the preset matching coefficient, so as to execute the corresponding step. For example, if superposing and fusing one texture feature with the matching image data yields a fusion error coefficient of 55% against a preset matching coefficient of 58.5%, the system generates the 3.5% difference value and continues the superposition fusion with further texture features; when the fusion error coefficient reaches 56.3%, which lies within the acceptance band of the 58.5% preset matching coefficient, the superposition fusion stops and the superposition-fused image data set is generated.
It should be noted that the fusion error coefficient is deemed to reach the preset matching coefficient once the two differ by no more than 3%. The preset increment is applied by increasing the number of texture features beyond those used in the initial superposition fusion: if the fusion error coefficient obtained by initially superposing and fusing two texture features with the matching image data cannot reach the preset matching coefficient, at least two further texture features must be superposed and fused in turn.
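For illustration, a Python sketch of this iterative superposition-fusion loop. The fusion operation itself is not specified in the disclosure, so it is injected as a callable here, and "reaching" the preset matching coefficient is modelled as coming within the 3% band noted above; all names are editorial assumptions:

def fuse_until_matched(texture_features, matching_image, fuse,
                       preset_coefficient=0.585, tolerance=0.03):
    """Superpose texture features onto the matching image data until the
    fusion error coefficient comes within `tolerance` of the preset value.

    `fuse` is a callable (features, image) -> (fused_image, error_coefficient).
    """
    used = 1
    while used <= len(texture_features):
        fused, coefficient = fuse(texture_features[:used], matching_image)
        if preset_coefficient - coefficient <= tolerance:
            # e.g. 56.3% is accepted against a 58.5% target (2.2% gap < 3%).
            return fused
        used += 1  # preset increment: superpose one more texture feature
    raise ValueError("preset matching coefficient not reached")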
In this embodiment, the judging module further includes:
the identification unit is used for identifying the image features in the image data and acquiring the occupation proportion of those image features in the image data, wherein the image features comprise color factors and texture factors;
and the extraction unit is used for extracting, from the image data, the inferior image data whose occupation proportion is below a preset average level, and taking the inferior image data as the detection object of the edge detection.
In this embodiment, the system identifies the image features in each piece of image data to obtain their occupation proportions, specifically the proportions occupied by the color factors and the texture factors. It then extracts the image data whose occupation proportion is below the preset average level, designates it as inferior image data, and uses that inferior image data as the detection object on which edge detection is performed.
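For illustration, a minimal Python sketch of this selection step, assuming the occupation-ratio computation is supplied by the caller; the function names are editorial assumptions:

def select_edge_detection_targets(images, occupation_ratio, preset_average):
    """Keep only the inferior image data whose combined color/texture
    occupation proportion falls below the preset average level; these
    frames become the detection objects handed to edge detection.

    `occupation_ratio` is a callable returning the share of a frame
    covered by the color and texture factors (0.0 to 1.0).
    """
    return [image for image in images if occupation_ratio(image) < preset_average]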

Claims (10)

1. A processing method for generating image data based on a smart watch, characterized by comprising the following steps:
collecting an image data set by adopting an image sensor, dividing the image data set into a plurality of image data based on a preset period, substituting the plurality of image data into a preset independent space, performing edge detection on the plurality of image data, and judging whether pixel values of the plurality of image data are matched;
If not, classifying to obtain matching image data and non-matching image data, acquiring a plurality of pixel values in the non-matching image data, identifying at least one pixel point in the plurality of pixel values, presenting image resolution corresponding to the plurality of pixel values according to the at least one pixel point, performing differential comparison on the image resolution and the matching image data, and generating a missing pixel area which is required to be filled and corresponds to the non-matching image data based on at least the number of missing pixel points in each pixel value;
for edge pixel points in the missing pixel region, confirming adjacent spans among all pixel points in the missing pixel region, capturing at least one missing pixel point position in the missing pixel region, constructing filling queues of all missing pixel points based on boundary distance mapping values of all missing pixel point positions and the edge pixel points, and carrying out iterative filling on all missing pixel points according to the filling queues to generate non-matching image data with color differences;
based on a first corresponding relation between pixel points and color factors which are pre-established in the matched image data, constructing a second corresponding relation between the pixel points and the color factors in the non-matched image data with color differences, comparing the first corresponding relation with the second corresponding relation in a different mode to obtain a non-overlapping part of the color factors of the first corresponding relation and the second corresponding relation, taking the non-overlapping part as a residual color filling path of the non-matched image data, and performing color filling on the non-matched image data with color differences according to the residual color filling path to generate processed image data.
2. The method for generating image data based on a smart watch according to claim 1, wherein the step of pre-establishing the first corresponding relation between the pixel points and the color factors in the matching image data comprises:
carrying out coordinate coding on each pixel point of the matched image data based on a coordinate point set in a preset color space, and recording corresponding coordinates and corresponding color values of each pixel point based on the color space, wherein the corresponding coordinates comprise x-point coordinate coefficients and y-point coordinate coefficients, and the corresponding color values comprise red colors of three primary colors, green colors of three primary colors and blue colors of the three primary colors;
and acquiring regional colors of the image contour according to the image contour constructed by each pixel point, sequentially giving colors to each adjacent regional color based on the base point center of the color space, and generating a first corresponding relation between each pixel point and the color factor.
3. The method for generating image data based on a smart watch according to claim 1, wherein the step of iteratively filling the missing pixels according to the filling queue to generate non-matching image data with color differences comprises:
Generating filling values of the missing pixel points based on the boundary distance mapping values;
judging whether the filling value is larger than the boundary distance mapping value or not;
if not, generating the filling values of the missing pixel points, marking the filling values of the missing pixel points according to the filling queue, and filling the missing pixel points based on a preset priority, so as to obtain the non-matching image data with color differences, wherein the preset priority specifically orders the filling values from high to low.
4. The method for generating image data based on a smart watch according to claim 1, wherein the step of generating the non-matching image data corresponding to the missing pixel region to be filled based on at least the number of missing pixels in each pixel value comprises:
acquiring the number of the pixel points corresponding to a single pixel value in non-matching image data;
judging whether the number of the pixel points is the same as the number of the pixel points of the matched image data;
if not, acquiring the pixel difference value between the two pixel point counts, scanning the missing pixel point coordinates corresponding to the pixel difference value based on a preset pixel coordinate set, identifying the specific positions corresponding to the missing pixel points in the pixel coordinate set, connecting those positions, and establishing the missing pixel region to be filled corresponding to the non-matching image data.
5. The method for generating image data based on a smart watch according to claim 1, wherein the step of presenting the image resolution corresponding to the plurality of pixel values according to the at least one pixel point and differentially comparing the image resolution with the matching image data comprises:
acquiring the number of pixel points existing in each inch of area in the non-matching image data;
judging whether the pixel number is matched with a preset pixel number sequence or not;
if not, generating a difference sequence corresponding to the pixel points, acquiring missing pixel points in the difference sequence based on the pixel point sequence, and identifying the horizontal direction corresponding to the missing pixel points in the difference sequence to obtain a sequence difference value of the missing pixel points, wherein the horizontal direction comprises a horizontal axis and a vertical axis.
6. The method for generating image data based on a smart watch according to claim 1, wherein, after the step of performing color filling on the non-matching image data with the color difference according to the residual color filling path to generate the processed image data, the method further comprises:
Extracting texture features of the processed image data to obtain at least one texture feature of the processed image data;
overlapping and fusing the at least one texture feature and the matched image data to obtain a fusion error coefficient corresponding to the overlapping and fusing;
judging whether the fusion error coefficient reaches a preset matching coefficient or not;
if not, generating a difference value corresponding to the fusion error coefficient and the preset matching coefficient, and continuing to perform superposition fusion on the at least one texture feature and the matching image data according to a preset increment until the fusion error coefficient corresponding to superposition fusion reaches the preset matching coefficient, stopping superposition fusion, and obtaining a superposition fusion image data set.
7. The method for generating image data based on a smart watch according to claim 1, wherein the step of substituting the plurality of image data into a preset independent space and performing edge detection on the plurality of image data comprises:
identifying image features in the image data, and acquiring the image features based on occupation ratios in the image data, wherein the image features comprise color factors and texture factors;
And extracting inferior image data with the occupation proportion lower than a preset average level from the image data, and taking the inferior image data as a detection object of the edge detection.
8. A processing system for generating image data based on a smart watch, comprising:
the judging module is used for acquiring an image data set by adopting an image sensor, dividing the image data set into a plurality of image data based on a preset period, substituting the plurality of image data into a preset independent space, carrying out edge detection on the plurality of image data, and judging whether pixel values of the plurality of image data are matched;
the execution module is used for classifying and obtaining matching image data and non-matching image data if not, acquiring a plurality of pixel values in the non-matching image data, identifying at least one pixel point in the plurality of pixel values, presenting image resolution corresponding to the plurality of pixel values according to the at least one pixel point, comparing the image resolution with the matching image data in a different way, and generating a missing pixel area which is required to be filled and corresponds to the non-matching image data based on at least the number of missing pixel points in each pixel value;
The generating module is used for confirming adjacent spans among all pixel points in the missing pixel region aiming at the edge pixel points in the missing pixel region, capturing at least one missing pixel point position in the missing pixel region, constructing filling queues of all missing pixel points based on boundary distance mapping values of all missing pixel point positions and the edge pixel points, and carrying out iterative filling on all missing pixel points according to the filling queues to generate non-matching image data with color differences;
and the filling module is used for constructing a second corresponding relation between the pixel points and the color factors in the non-matching image data with the color difference based on a first corresponding relation between the pixel points and the color factors which are pre-established in the matching image data, comparing the first corresponding relation with the second corresponding relation in a different way to obtain a non-overlapping part of the color factors of the first corresponding relation and the second corresponding relation, taking the non-overlapping part as a residual color filling path of the non-matching image data, and performing color filling on the non-matching image data with the color difference according to the residual color filling path to generate processed image data.
9. The processing system for generating image data based on a smart watch according to claim 8, wherein the filling module further comprises:
a recording unit, configured to coordinate-encode each pixel point of the matched image data based on a coordinate point set in a preset color space, and record corresponding coordinates and corresponding color values of each pixel point in the color space, where the corresponding coordinates include an x-point coordinate coefficient and a y-point coordinate coefficient, and the corresponding color values include red color of three primary colors, green color of three primary colors, and blue color of three primary colors;
and the generating unit is used for acquiring the regional color of the image contour according to the image contour constructed by each pixel point, sequentially giving colors to each adjacent regional color based on the base point center of the color space, and generating a first corresponding relation between each pixel point and the color factor.
10. The processing system for generating image data based on a smart watch according to claim 8, wherein the generating module further comprises:
a second generating unit, configured to generate a filling value of each missing pixel point based on the boundary distance mapping value;
A judging unit, configured to judge whether the filling value is greater than the boundary distance mapping value;
and the execution unit is used for, if not, generating the filling values of the missing pixel points, marking the filling values of the missing pixel points according to the filling queue, and filling the missing pixel points based on a preset priority to obtain the non-matching image data with color differences, wherein the preset priority specifically orders the filling values from high to low.
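For illustration, a compact Python sketch of the filling queue of claims 1 and 3, under two editorial assumptions not fixed by the disclosure: the boundary distance mapping value of a missing pixel point is taken as its distance to the nearest edge pixel point, and filling copies the value of that nearest edge pixel:

import heapq
import math

def fill_missing_pixels(image, missing, edge_pixels):
    """Iteratively fill `missing` coordinates in `image`, a dict {(x, y): value}.

    Each missing pixel point receives a fill value derived from its boundary
    distance mapping value; a max-heap orders the filling queue so that the
    highest fill values are filled first (the high-to-low preset priority).
    Assumes `edge_pixels` is non-empty.
    """
    queue = []
    for (mx, my) in missing:
        # Boundary distance mapping value: distance to the nearest edge pixel.
        nearest_edge, distance = min(
            (((ex, ey), math.hypot(mx - ex, my - ey)) for (ex, ey) in edge_pixels),
            key=lambda item: item[1])
        fill_value = 1.0 / (1.0 + distance)  # assumed mapping: nearer edge, higher value
        heapq.heappush(queue, (-fill_value, (mx, my), nearest_edge))
    while queue:
        _, coord, nearest_edge = heapq.heappop(queue)
        image[coord] = image[nearest_edge]  # assumed fill rule: copy the nearest edge value
    return image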
CN202310190279.3A 2023-03-02 2023-03-02 Processing method and system for generating image data based on intelligent watch Active CN116051681B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310190279.3A CN116051681B (en) 2023-03-02 2023-03-02 Processing method and system for generating image data based on intelligent watch

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310190279.3A CN116051681B (en) 2023-03-02 2023-03-02 Processing method and system for generating image data based on intelligent watch

Publications (2)

Publication Number Publication Date
CN116051681A true CN116051681A (en) 2023-05-02
CN116051681B CN116051681B (en) 2023-06-09

Family

ID=86121996

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310190279.3A Active CN116051681B (en) 2023-03-02 2023-03-02 Processing method and system for generating image data based on intelligent watch

Country Status (1)

Country Link
CN (1) CN116051681B (en)

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009026534A1 (en) * 2007-08-23 2009-02-26 Verasonics, Inc. Adaptive ultrasound image reconstruction based on sensing of local media motion
US20110209044A1 (en) * 2010-02-25 2011-08-25 Sharp Kabushiki Kaisha Document image generating apparatus, document image generating method and computer program
JP2010182330A (en) * 2010-04-05 2010-08-19 Omron Corp Method for processing color image and image processor
US8472684B1 (en) * 2010-06-09 2013-06-25 Icad, Inc. Systems and methods for generating fused medical images from multi-parametric, magnetic resonance image data
DE102011086456A1 (en) * 2011-11-16 2013-05-16 Siemens Aktiengesellschaft Reconstruction of image data
US20130135298A1 (en) * 2011-11-30 2013-05-30 Panasonic Corporation Apparatus and method for generating new viewpoint image
GB201721811D0 (en) * 2017-12-22 2018-02-07 Novarum Dx Ltd Analysis of a captured image to determine a test outcome
US20200057907A1 (en) * 2018-08-14 2020-02-20 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
WO2020103110A1 (en) * 2018-11-22 2020-05-28 深圳市大疆创新科技有限公司 Image boundary acquisition method and device based on point cloud map and aircraft
CN110246100A (en) * 2019-06-11 2019-09-17 山东师范大学 A kind of image repair method and system based on angle perception Block- matching
CN111461989A (en) * 2020-04-02 2020-07-28 深圳普捷利科技有限公司 Vehicle-mounted image pixel adjusting method, device, equipment and readable storage medium
WO2021213508A1 (en) * 2020-04-24 2021-10-28 安翰科技(武汉)股份有限公司 Capsule endoscopic image stitching method, electronic device, and readable storage medium
WO2021243895A1 (en) * 2020-06-02 2021-12-09 苏州科瓴精密机械科技有限公司 Image-based working position identification method and system, robot, and storage medium
US20220051396A1 (en) * 2020-08-11 2022-02-17 Zebra Medical Vision Ltd. Cross modality training of machine learning models
WO2022042436A1 (en) * 2020-08-27 2022-03-03 腾讯科技(深圳)有限公司 Image rendering method and apparatus, and electronic device and storage medium
US20220197893A1 (en) * 2020-12-23 2022-06-23 Here Global B.V. Aerial vehicle and edge device collaboration for visual positioning image database management and updating
GB202104113D0 (en) * 2021-03-24 2021-05-05 Sony Interactive Entertainment Inc Image rendering method and apparatus
CN113673536A (en) * 2021-07-09 2021-11-19 浪潮金融信息技术有限公司 Image color extraction method, system and medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MODAVA M et al.: "Coastline extraction from SAR images using spatial fuzzy clustering and the active contour method", International Journal of Remote Sensing, page 355 *
ZHAO Jianchao; YIN Xinfu: "Research and Simulation of a Multi-Sensor Fusion Method for Images with Large Pixel Differences", Computer Simulation, no. 08, pages 258-260 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116863861A (en) * 2023-09-05 2023-10-10 欣瑞华微电子(上海)有限公司 Image processing method and device based on non-explicit point judgment and readable storage medium
CN116863861B (en) * 2023-09-05 2023-11-24 欣瑞华微电子(上海)有限公司 Image processing method and device based on non-explicit point judgment and readable storage medium

Also Published As

Publication number Publication date
CN116051681B (en) 2023-06-09

Similar Documents

Publication Publication Date Title
US7362918B2 (en) System and method for de-noising multiple copies of a signal
US6912313B2 (en) Image background replacement method
US7894633B1 (en) Image conversion and encoding techniques
US8508580B2 (en) Methods, systems, and computer-readable storage media for creating three-dimensional (3D) images of a scene
US7668404B2 (en) Method and system of deskewing an image using monochrome conversion to separate foreground from background
US5841899A (en) Specific color field recognition apparatus and method
CN109495729B (en) Projection picture correction method and system
US9332247B2 (en) Image processing device, non-transitory computer readable recording medium, and image processing method
CN116051681B (en) Processing method and system for generating image data based on intelligent watch
CN111586273B (en) Electronic device and image acquisition method
JP2007067847A (en) Image processing method and apparatus, digital camera apparatus, and recording medium recorded with image processing program
US10074209B2 (en) Method for processing a current image of an image sequence, and corresponding computer program and processing device
CN115035147A (en) Matting method, device and system based on virtual shooting and image fusion method
US7003160B2 (en) Image processing apparatus, image processing method, and computer readable recording medium recording image processing program for processing image obtained by picking up or reading original
CN108833874B (en) Panoramic image color correction method for automobile data recorder
CN111192227A (en) Fusion processing method for overlapped pictures
JP2004519048A (en) Method and apparatus for improving object boundaries extracted from stereoscopic images
CN113239806B (en) Curtain wall plate identification method and system based on image identification
CN115546141A (en) Small sample Mini LED defect detection method and system based on multi-dimensional measurement
CN113516621A (en) Liquid detection method, device, equipment and storage medium
JP5131399B2 (en) Image processing apparatus, image processing method, and program
CN117376718B (en) Real-time color adjustment method and system based on camera output signals
CN114239635B (en) DOI image graffiti processing method, device and equipment
CN114001671B (en) Laser data extraction method, data processing method and three-dimensional scanning system
WO2024134935A1 (en) Three-dimensional information correction device and three-dimensional information correction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant