CN108986036B - Method and terminal for processing files - Google Patents

Method and terminal for processing files

Info

Publication number
CN108986036B
CN108986036B (application CN201710404417.8A)
Authority
CN
China
Prior art keywords
layer
layers
image data
color
pixel points
Prior art date
Legal status
Active
Application number
CN201710404417.8A
Other languages
Chinese (zh)
Other versions
CN108986036A (en)
Inventor
赵冬晓
Current Assignee
ZTE Corp
Original Assignee
ZTE Corp
Priority date
Filing date
Publication date
Application filed by ZTE Corp
Priority to CN201710404417.8A
Priority to PCT/CN2018/078154 (WO2018219005A1)
Publication of CN108986036A
Application granted
Publication of CN108986036B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10024 Color image

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Facsimile Image Signal Circuits (AREA)

Abstract

The invention provides a method and a terminal for processing files. The method includes: acquiring image data of a file to be processed by means of first light, so as to obtain color values of pixel points in first image data of the file to be processed and first coordinate information of those pixel points in the first image data; acquiring image data of the file to be processed by means of second light, so as to obtain first coordinate information of pixel points in second image data of the file to be processed; and adjusting, according to the color values of the pixel points in the first image data, the pixel points in the second image data that correspond to the coordinate information of the pixel points in the first image data. The method and the device solve the problem in the related art that images obtained by scanning or photographing a file suffer serious pixel loss, and improve the user experience.

Description

Method and terminal for processing files
Technical Field
The invention relates to the field of file processing, in particular to a method and a terminal for processing a file.
Background
As terminals provide more and more functions, people use them in many aspects of daily life, making life fast and convenient. Among these uses, scanning, photographing and recognizing documents with software such as Evernote or business-card scanning applications is an important one. In the process of using such software, documents such as books, business cards or forms are often scanned and photographed.
In the related art, a color photograph is obtained during scanning or photographing and is then either stored directly or converted by grayscale processing into a black-and-white grayscale photograph. With this approach, if the scanned or photographed paper document carries annotations or scribbled marks, pixel loss during black-and-white binarization is severe: the edges of the resulting characters are badly eroded, the characters become thin, and handwritten characters often lose strokes. In addition, printed text and handwritten parts cannot be distinguished well, and for scenes that require character recognition the mixed-in handwritten characters greatly affect the recognition.
For the above problem in the related art that the loss of the image pixels obtained by scanning or photographing the document is relatively serious, no effective solution exists at present.
Disclosure of Invention
The embodiment of the invention provides a method and a terminal for processing a file, which are used for at least solving the problem that the loss of image pixels obtained by scanning or photographing the file is relatively serious in the related art.
According to an aspect of the present invention, there is provided a method of processing a file, including: acquiring image data of a file to be processed through first light to obtain color values of pixel points in first image data of the file to be processed and first coordinate information of the pixel points of the first image data in the first image data; acquiring image data of the file to be processed through second light to obtain first coordinate information of pixel points in second image data of the file to be processed; and adjusting the pixel points in the second image data corresponding to the coordinate information of the pixel points in the first image data according to the color values of the pixel points in the first image data.
Further, the adjusting, according to the color value of the pixel point in the first image data, the pixel point in the second image data corresponding to the coordinate information of the pixel point in the first image data includes: grouping pixel points in the first image data according to a plurality of preset color value intervals to obtain a plurality of grouped layers, wherein the color values of the pixel points in the same layer are in the same color value interval; and adjusting color values of pixel points in the second image data corresponding to the coordinate information of the pixel points in the one or more image layers according to the color values of the pixel points in the one or more image layers, and/or adjusting positions of the pixel points in the second image data corresponding to the coordinate information of the pixel points in the one or more image layers.
Further, the adjusting, according to the color values of the pixel points in the one or more image layers, the color values of the pixel points in the second image data corresponding to the coordinate information of the pixel points in the one or more image layers includes: adjusting color values of pixel points corresponding to other coordinate values in the second image data except the coordinate value corresponding to the first image layer in the plurality of image layers to be consistent with the color value of the first image layer in the plurality of image layers; and the first image layer is a bottom color image layer of the file to be processed.
Further, the adjusting, according to the color values of the pixel points in the one or more image layers, the color values of the pixel points in the second image data corresponding to the coordinate information of the pixel points in the one or more image layers includes: adjusting color values of pixel points corresponding to other coordinate values in the second image data except for the coordinate value corresponding to the second layer in the plurality of layers to be consistent with the color value of the second layer in the plurality of layers; the second layer is the other layers except the bottom color layer in the plurality of layers, and the second layer comprises one or more sub-image layers.
Further, when the second layer includes a plurality of sub-layers, after adjusting the color value of a pixel point corresponding to a coordinate value other than the coordinate value corresponding to the second layer in the plurality of layers in the second image data to be consistent with the color value of the second layer in the plurality of layers, adjusting the color value of a part of sub-layers in the plurality of sub-layers consistent with the color value of the second layer in the second image data to be consistent with the color value of the first layer; the first image layer is a bottom color image layer of the file to be processed; and/or after adjusting color values of pixel points corresponding to other coordinate values in the second image data except for the coordinate values corresponding to the second layer in the plurality of layers to be consistent with the color values of the second layer in the plurality of layers, adjusting color values of a part of sub-map layers in the plurality of sub-map layers consistent with the color values of the second layer in the second image data to be consistent with the color value of one of the plurality of sub-map layers.
Further, the adjusting, according to the color values of the pixel points in the one or more image layers, the position of the pixel point in the second image data corresponding to the coordinate information of the pixel point in the one or more image layers includes: acquiring coordinate information of a second layer in the first image data in the first coordinate information; moving the position of a pixel point corresponding to the coordinate information of the second image layer in the second image data; the second layer is the other layers except the bottom color layer in the plurality of layers, and the second layer comprises one or more sub-image layers.
Further, the first light is invisible light, and the second light is visible light.
According to another aspect of the present invention, there is provided a terminal including: a fill light assembly for emitting first and second lights; the camera shooting assembly is used for acquiring image data of the file to be processed through the first light and acquiring the image data of the file to be processed through the second light; the processor is used for obtaining color values of pixel points in first image data of the file to be processed and first coordinate information of the pixel points of the first image data in the first image data according to the image data acquired by the first light; obtaining first coordinate information of pixel points in second image data of the file to be processed according to the image data acquired by the second light; and adjusting the pixel points in the second image data corresponding to the coordinate information of the pixel points in the first image data according to the color values of the pixel points in the first image data.
Further, the processor is further configured to group the pixel points in the first image data according to a plurality of preset color value intervals to obtain a plurality of grouped layers, where color values of the pixel points in the same layer are in the same color value interval; and adjusting color values of pixel points in the second image data corresponding to the coordinate information of the pixel points in the one or more image layers according to the color values of the pixel points in the one or more image layers, and/or adjusting positions of the pixel points in the second image data corresponding to the coordinate information of the pixel points in the one or more image layers.
Further, the processor is further configured to adjust a color value of a pixel point corresponding to a coordinate value of the second image data, except for the coordinate value corresponding to the first layer of the plurality of layers, to be consistent with a color value of the first layer of the plurality of layers; and the first image layer is a bottom color image layer of the file to be processed.
Further, the processor is further configured to adjust a color value of a pixel point corresponding to a coordinate value of the second image data, except for the coordinate value corresponding to the second layer of the plurality of layers, to be consistent with a color value of the second layer of the plurality of layers; the second layer is the other layers except the bottom color layer in the plurality of layers, and the second layer comprises one or more sub-image layers.
Further, in a case that the second image layer includes a plurality of sub-image layers, the processor is further configured to adjust a color value of a pixel point corresponding to a coordinate value of the second image data, except for a coordinate value corresponding to the second image layer of the plurality of image layers, to be consistent with a color value of the second image layer of the plurality of image layers, and then adjust a color value of a partial sub-image layer of the plurality of sub-image layers, which is consistent with the color value of the second image layer, of the second image data to be consistent with the color value of the first image layer; the first image layer is a bottom color image layer of the file to be processed; and/or after adjusting color values of pixel points corresponding to other coordinate values in the second image data except for the coordinate values corresponding to the second layer in the plurality of layers to be consistent with the color values of the second layer in the plurality of layers, adjusting color values of a part of sub-map layers in the plurality of sub-map layers consistent with the color values of the second layer in the second image data to be consistent with the color value of one of the plurality of sub-map layers.
Further, the processor is further configured to obtain coordinate information of a second layer in the first image data in the first coordinate information; moving the position of a pixel point corresponding to the coordinate information of the second image layer in the second image data; the second layer is the other layers except the bottom color layer in the plurality of layers, and the second layer comprises one or more sub-image layers.
Further, the camera assembly includes: the light filtering switching component and the photosensitive component; under the condition that the second light is filtered out by the light filtering component, the photosensitive component acquires image data of the file to be processed through the first light; and under the condition that the first light is filtered out by the light filtering component, the photosensitive component acquires image data of the file to be processed through the second light.
Further, the first light is invisible light, and the second light is visible light.
According to the invention, the image data of the file to be processed is acquired through the first light, so that the color value of the pixel point in the first image data of the file to be processed and the first coordinate information of the pixel point of the first image in the first image data are obtained; acquiring image data of the file to be processed through second light to obtain first coordinate information of pixel points in the second image data of the file to be processed; the pixel points corresponding to the coordinate information of the pixel points in the first image data in the second image data are adjusted according to the color values of the pixel points in the first image data, so that the problem that image pixel loss obtained by scanning or photographing a file in the related technology is serious is solved, and the experience effect of a user is improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow diagram of a method of processing a file according to an embodiment of the invention;
FIG. 2 is a schematic diagram of an apparatus for processing a document according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a hardware location architecture according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a camera according to an embodiment of the present invention;
FIG. 5 is a circuit diagram of a filter switching module according to an embodiment of the invention;
fig. 6 is a schematic diagram of a terminal structure according to an embodiment of the present invention;
fig. 7 is a normal distribution diagram formed by invisible light data according to an embodiment of the present invention.
Detailed Description
The invention will be described in detail hereinafter with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Example 1
In the present embodiment, a method for processing a file is provided, and fig. 1 is a flowchart of a method for processing a file according to an embodiment of the present invention, as shown in fig. 1, the flowchart includes the following steps:
step S102: acquiring image data of a file to be processed through first light to obtain color values of pixel points in first image data of the file to be processed and first coordinate information of the pixel points of the first image data in the first image data;
step S104: acquiring image data of the file to be processed through second light to obtain first coordinate information of pixel points in the second image data of the file to be processed;
step S106: and adjusting the pixel points in the second image data corresponding to the coordinate information of the pixel points in the first image data according to the color values of the pixel points in the first image data.
Through steps S102 to S106 of this embodiment, the file to be processed is captured with the first light and the second light respectively to obtain the first image data and the second image data, and the pixel points in the second image data corresponding to the coordinate information of the pixel points in the first image data are then adjusted according to the color values of the pixel points in the first image data. This alleviates the problem in the related art that images obtained by scanning or photographing a file suffer serious pixel loss, and improves the user experience.
It should be noted that the execution subject of the method of the present embodiment is preferably an intelligent terminal. In addition, the color values involved in the present embodiment may be gray values, RGB or color values in other formats.
In step S106, a manner of adjusting a pixel point in the second image data corresponding to the coordinate information of the pixel point in the first image data according to the color value of the pixel point in the first image data may further include:
step S106-1: grouping pixel points in the first image data according to a plurality of preset color value intervals to obtain a plurality of grouped layers, wherein the color values of the pixel points in the same layer are in the same color value interval;
step S106-2: and adjusting the color value of the pixel point corresponding to the coordinate information of the pixel point in the one or more image layers in the second image data according to the color value of the pixel point in the one or more image layers, and/or adjusting the position of the pixel point corresponding to the coordinate information of the pixel point in the one or more image layers in the second image data.
The above steps S106-1 and S106-2 are explained in detail by the following embodiments;
(1) Obtain a data array X of the file to be processed through the first light and perform binarization on X, taking the Y value of each pixel point in X. It should be noted that one common data format in digital image processing is the YUV format, which is also the format collected by the camera; each element of array X is a color value containing the three components Y, U and V, and the Y processed below is the Y of YUV.
When Y > M (threshold), set Y = 255; when Y < M (threshold), set Y = 0.
When global binarization is adopted, the threshold M can take the average of the Y values of all pixel points. Local binarization can also be adopted: the whole image is divided into N windows according to a certain rule, each of the N windows is binarized against a threshold M, and an adaptive threshold with higher precision can be used for local binarization. In that case the threshold is computed from a parametric equation over local features of the window, such as the mean E of the pixels, the variance P between pixels and the root-mean-square value Q between pixels, for example M = a×E + b×P + c×Q, where a, b and c are free parameters. The value of M can also be determined by the bimodal method, the P-parameter method, an iterative method, the OTSU method, and so on.
Record the coordinate values of all points with Y = 255; the region where Y = 255 is the ground color (background) region, and the coordinates of its points are recorded as array D. The coordinates of the remaining points form the text region. The coordinate values of the ground color region and of the text region correspond one-to-one between the visible light image and the invisible light image and remain consistent.
(2) From array X, take the array of Y values of all coordinate points. Set the Y values of the ground color region in this array to 0 to form a corrected array Ye, and perform normal-distribution processing on the values of Ye, where the abscissa is the Ye value and the ordinate is the number of coordinate points.
(3) From the distribution curve of Ye, several peaks in the image array can be obtained, and each peak corresponds to one layer. Select the peak value Y0 with the largest area; the points satisfying Y0 - M0 < Ye < Y0 + M0 (M0 is a threshold of the peak distribution, for which the variance of the sub-distribution can be taken) are the points of the layer corresponding to this peak. Store the coordinates of these points, record them as layer 0, and store the coordinate data in array W.
(4) By analogy, layer 1, layer 2 and so on are obtained from the distribution; when the number of points of a certain layer is smaller than a threshold Mt, the layer separation stops.
(5) The point coordinates of layer 1, layer 2 … layer N (the layers other than layer 0) are stored separately as arrays Z1, Z2 … ZN; these are recorded as annotation layers.
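Purely as an illustration (not part of the patent description), steps (1) to (5) can be sketched in Python, assuming 8-bit Y values, a single global threshold M equal to the mean Y value, and peaks of the Ye histogram taken greedily with a fixed half-width M0; coordinates are (row, column) pairs, and the names follow the arrays D, W, Z1, Z2 … introduced above:

```python
import numpy as np

def separate_layers(y_first, m0=8, mt=500, max_layers=8):
    """Split the first-light (invisible light) Y channel into a ground color
    region D, layer 0 (array W) and annotation layers Z1, Z2, ... (sketch)."""
    y = y_first.astype(np.float64)

    # (1) Global binarization: take the threshold M as the mean Y value.
    m = y.mean()
    ground = y > m                      # points set to 255 -> ground color region
    d = np.argwhere(ground)             # array D: coordinates of the ground color area

    # (2) Corrected array Ye: Y values of the ground color region set to 0.
    ye = np.where(ground, 0.0, y)
    hist = np.bincount(ye[~ground].astype(int), minlength=256)   # Ye distribution

    # (3)/(4) Take peaks from the largest downward; each peak gives one layer.
    assigned = ground.copy()
    layers = []
    for _ in range(max_layers):
        if hist.max() == 0:
            break
        y0 = int(hist.argmax())          # peak value Y0 with the largest count
        in_peak = (~assigned) & (ye > y0 - m0) & (ye < y0 + m0)
        coords = np.argwhere(in_peak)
        if len(coords) < mt:             # stop when a layer falls below threshold Mt
            break
        layers.append(coords)
        assigned |= in_peak
        hist[max(0, y0 - m0 + 1): y0 + m0] = 0   # remove this peak from the histogram

    # (5) Layer 0 -> array W; the remaining layers -> annotation arrays Z1, Z2, ...
    w = layers[0] if layers else np.empty((0, 2), dtype=int)
    return d, w, layers[1:]
```

The same D, W and Z1, Z2 … arrays are reused in the application-scenario sketches further below.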
After the bottom color layer, layer 0, layer 1, layer 2 … are obtained, the points of each layer correspond to their respective color values, and image data in which the different color values are separated is obtained.
After the final visible image data and layer data are obtained, the system can use these data for further processing. The corresponding relationship is as follows:
First light image-array X
Bottom color layer-array D
Layer 0-array W
Layer 1-array Z1
Layer 2-array Z2
The most common case, a user annotating a monochrome printed book with a pen, is as follows:
Bottom color layer-array D: the color corresponding to visible light is the color of the paper; taking the average of the color values at all the coordinates in array D gives the average color of the paper.
Layer 0-array W: layer 0 is the layer corresponding to the color with the largest peak area in the distribution of the first-light (infrared) image; for a printed document, the largest part of the distribution should be the printed text, so layer 0 corresponds to the data of the printed portion, and array W gives the coordinate positions where the printed text is located.
Layers 1 and 2 correspond to the annotation parts in the document; the difference in the Y values at the normal-distribution peak of each layer is caused by the different brightness of different colors under infrared light imaging, so each layer corresponds to one color, and the coordinate positions are distributed in Z1 and Z2.
Based on the above (1) to (5), in an alternative embodiment of the present embodiment, the manner of step S106-2 includes the following alternative embodiments:
the first method is as follows: adjusting color values of pixel points corresponding to other coordinate values in the second image data except the coordinate value corresponding to the first image layer in the plurality of image layers to be consistent with the color value of the first image layer in the plurality of image layers; the first image layer is a bottom color image layer of the file to be processed.
In a specific application scenario, this mode can be used for document restoration: take array X, compute the average color value D0 of the bottom color layer over the coordinates in D, and modify the color values at the coordinate values of all annotation layers such as Z1 and Z2 in array X to D0; that is, fill the data of the annotation parts such as layer 1 and layer 2 with the average base color to obtain an array A0, where A0 is the restored document. Further operations such as character recognition can then be performed.
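A minimal sketch of this restoration, assuming the image is an H×W×3 NumPy array, the coordinates are (row, column) pairs as in the separation sketch above, and the function name is illustrative:

```python
def restore_document(image, d, annotation_layers):
    """Fill every annotation-layer pixel with the average base color D0
    computed over the ground color coordinates D (hedged sketch)."""
    a0 = image.copy()
    d0 = image[d[:, 0], d[:, 1]].mean(axis=0)   # D0: average paper color
    for z in annotation_layers:                 # Z1, Z2, ...: annotation coordinates
        a0[z[:, 0], z[:, 1]] = d0
    return a0                                   # A0: the restored document
```

The returned array A0 can then be handed to character recognition or saved; recognition itself is outside this sketch.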
The second method comprises the following steps: adjusting color values of pixel points corresponding to other coordinate values in the second image data except the coordinate value corresponding to the second layer in the plurality of layers to be consistent with the color value of the second layer in the plurality of layers; the second layer is the other layer except the bottom color layer in the plurality of layers, and the second layer comprises one or more sub-image layers.
In a specific application scenario, this mode can be used to distinguish annotation colors: the coordinate positions Z1 and Z2 corresponding to the annotation layers (layer 1, layer 2, etc.) are the coordinate positions of annotations of different colors. In X, take the average of the color values of all pixel points at the Z1 coordinates and record it as color 1; color 1 is then the color of layer 1. Color 2 of layer 2 is obtained in the same way.
Z1-layer 1-color 1
Z2-layer 2-color 2
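The color differentiation reduces to averaging the image's color values over each annotation layer's coordinates. A short sketch under the same assumptions as above (the image is a NumPy array; the function name is illustrative):

```python
def annotation_colors(image, annotation_layers):
    """Return [color 1, color 2, ...]: the average color of each
    annotation layer Z1, Z2, ... in the given image (illustrative)."""
    return [image[z[:, 0], z[:, 1]].mean(axis=0) for z in annotation_layers]
```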
The third method comprises the following steps: under the condition that the second layer comprises a plurality of sub-layers, after the color values of pixel points corresponding to other coordinate values in the second image data except the coordinate values corresponding to the second layer in the plurality of layers are adjusted to be consistent with the color values of the second layer in the plurality of layers, the color values of a part of the sub-layers in the plurality of sub-layers consistent with the color values of the second layer in the second image data are adjusted to be consistent with the color values of the first layer; the first image layer is a bottom color image layer of the file to be processed; and/or,
after the color values of the pixel points corresponding to the coordinate values except the coordinate values corresponding to the second layer in the plurality of layers in the second image data are adjusted to be consistent with the color values of the second layer in the plurality of layers, the color values of a part of sub-map layers in the plurality of sub-map layers consistent with the color values of the second layer in the second image data are adjusted to be consistent with the color value of one sub-map layer in the plurality of sub-map layers.
In a specific application scenario, this mode can be used for hiding and displaying annotations: on the basis of mode one and mode two, annotations of different colors can be hidden or displayed by processing the values of array A0, where A0 is the restored original document, i.e. the document with its annotations hidden. In array A0, fill the color values of the points at the coordinate positions of Z1 with color 1; the filled array is then the image showing the annotations of color 1. By analogy, an image showing the annotations of color 2, or images with various combinations of annotation colors, can be obtained.
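Hiding and displaying then amounts to starting from the restored array A0 and re-filling only the chosen annotation layers with their own colors. A sketch under the same assumptions (the selection parameter is illustrative):

```python
def show_annotations(a0, annotation_layers, colors, visible=(0,)):
    """Start from the restored document A0 (all annotations hidden) and
    re-fill only the selected annotation layers with their own colors."""
    out = a0.copy()
    for i in visible:                    # indices of the annotation layers to display
        z = annotation_layers[i]
        out[z[:, 0], z[:, 1]] = colors[i]
    return out
```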
The fourth method is as follows: acquiring coordinate information of a second layer in the first image data in the first coordinate information; moving the position of a pixel point corresponding to the coordinate information of the second image layer in the second image data; the second layer is the other layers except the bottom color layer in the plurality of layers, and the second layer comprises one or more sub-image layers.
In a specific application scenario, the mode may be the movement of the annotation: when certain annotations are located within the printed text area, the annotations may be moved to the blank portion.
For example: take array W of layer 0 and process its coordinate values, taking the minimum value s0 and the maximum value s1 of all abscissas s, and the minimum value t0 and the maximum value t1 of all ordinates t. The rectangle formed by the coordinate points (s0, t0), (s0, t1), (s1, t0), (s1, t1) is the area of the printed text. Then take the abscissa sz and the ordinate tz of the points of the annotation layers (Z1, Z2 of layer 1, layer 2, etc.); if they satisfy
s0 < sz < s1 and t0 < tz < t1,
the coordinate lies within the printed-text area and translation processing is performed on it.
sz(e)=sz+Ts
tz(e)=tz+Tt
The horizontal coordinate translation amount Ts and the vertical coordinate translation amount Tt are obtained by calculating all coordinate point sets which belong to the same annotation layer and are adjacent to the coordinate.
In the image array, the color value of the annotation-layer coordinate point adjusted to (sz(e), tz(e)) is modified to the corresponding annotation color, and the color value of the original annotation-layer coordinate point is modified to the color value D0 of the bottom color layer; the movement of the annotation is thus realized.
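A sketch of the annotation movement under the same assumptions: the printed-text rectangle is taken from the bounding box of array W, and fixed integer offsets ts and tt stand in for the neighbourhood-based computation of Ts and Tt described above; all parameter names are illustrative.

```python
import numpy as np

def move_annotations(image, w, annotation_layers, colors, d0, ts, tt):
    """Translate annotation points lying inside the printed-text rectangle
    by (ts, tt); their old positions are repainted with the base color D0."""
    s0, t0 = w.min(axis=0)                       # bounding box of layer 0 (printed text)
    s1, t1 = w.max(axis=0)
    out = image.copy()
    rows, cols = image.shape[:2]
    for z, color in zip(annotation_layers, colors):
        inside = (z[:, 0] > s0) & (z[:, 0] < s1) & (z[:, 1] > t0) & (z[:, 1] < t1)
        src = z[inside]
        dst = src + np.array([ts, tt])           # sz(e) = sz + Ts, tz(e) = tz + Tt
        dst[:, 0] = dst[:, 0].clip(0, rows - 1)  # keep moved points on the page
        dst[:, 1] = dst[:, 1].clip(0, cols - 1)
        out[src[:, 0], src[:, 1]] = d0           # vacated points take the base color D0
        out[dst[:, 0], dst[:, 1]] = color        # moved points take the annotation color
    return out
```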
In this embodiment, the first light is invisible light and the second light is visible light. Take infrared light as an example of invisible light: when infrared light illuminates an object directly, different colors and different materials absorb the invisible light differently, so the reflected light differs. When infrared light illuminates different colors of a given material, the parts of different colors image with different color values under the infrared light; by distinguishing these color values, the different colors can be classified, which is exactly the layering mentioned above, and each layer corresponds to one color.
Of course, the invisible light may also be light other than infrared, as long as different colors and different materials have different absorptance for it.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 2
In this embodiment, a device for processing a file is further provided, and the device is used to implement the foregoing embodiments and preferred embodiments, and details are not repeated for what has been described. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 2 is a schematic structural diagram of an apparatus for processing a document according to an embodiment of the present invention, as shown in fig. 2, the apparatus including:
the first acquisition module 22 is configured to acquire image data of the file to be processed through the first light, so as to obtain color values of pixel points in the first image data of the file to be processed and first coordinate information of the pixel points of the first image data in the first image data;
the second acquisition module 24 is coupled and linked with the first acquisition module 22 and is used for acquiring image data of the file to be processed through the second light to obtain first coordinate information of pixel points in the second image data of the file to be processed;
and the adjusting module 26 is coupled and linked with the second acquisition module 24, and is configured to adjust the pixel points in the second image data corresponding to the coordinate information of the pixel points in the first image data according to the color values of the pixel points in the first image data.
It should be noted that, if the apparatus is applied to a terminal with a camera module, the first acquisition module 22 and the second acquisition module 24 may be integrated in the camera module.
In a specific application scenario, the camera module may be composed of the first acquisition module 22 and the second acquisition module 24;
wherein the two parts may be: the device comprises a filtering switching module and a photosensitive element. The photosensitive element is a common CCD CMOS and other elements which can be used as a sightseeing function, the filtering switching module is provided with two layers of optical filters which are respectively a visible light optical filter and an infrared optical filter, wherein the visible light optical filter can filter out all non-visible spectrums, the photosensitive element only receives images of visible spectrum parts to obtain visible light images, the infrared optical filter can filter out all non-infrared spectrums, and the photosensitive element only receives images of infrared spectrum parts to obtain infrared images. The filtering switching module is mechanical, one of the visible light filter and the infrared light filter is controlled to be positioned in front of the photosensitive element through an electric signal to perform filtering, the infrared light filter is in a folded state in a normal mode, the photosensitive element collects visible light images, when a document is shot, a user presses a shutter, the filtering switching module firstly keeps the visible light filter to work to collect visible light images, then the visible light filter is folded, the infrared light filter is put down, and infrared images are collected.
Optionally, the adjusting module comprises: the grouping unit is used for grouping the pixel points in the first image data according to a plurality of preset color value intervals to obtain a plurality of grouped layers, wherein the color values of the pixel points in the same layer are in the same color value interval; and the adjusting unit is coupled and linked with the grouping unit and is used for adjusting the color values of the pixel points corresponding to the coordinate information of the pixel points in the one or more image layers in the second image data according to the color values of the pixel points in the one or more image layers and/or adjusting the positions of the pixel points corresponding to the coordinate information of the pixel points in the one or more image layers in the second image data.
For the adjustment of the color values and/or the positions of the pixel points performed by the adjusting unit, the following manners may be used in this embodiment:
The first method is as follows: the adjusting unit is further configured to adjust color values of pixel points corresponding to coordinate values in the second image data, except for coordinate values corresponding to a first layer in the plurality of layers, to be consistent with color values of the first layer in the plurality of layers; the first image layer is a bottom color image layer of the file to be processed.
The method corresponds to the first method in step S106-2 in the first embodiment, that is, the method is a document restoring method in a specific application scenario.
In a second mode, the adjusting unit is further configured to adjust color values of pixel points corresponding to coordinate values in the second image data, except for coordinate values corresponding to a second layer of the plurality of layers, to be consistent with color values of the second layer of the plurality of layers; the second layer is the other layer except the bottom color layer in the plurality of layers, and the second layer comprises one or more sub-image layers.
This approach corresponds to the second approach of step S106-2 in the first embodiment, that is, the approach distinguishes colors of annotations in a specific application scenario.
In a third mode, in a case where the second layer includes a plurality of sub-layers, the adjusting unit is further configured to, after adjusting the color values of the pixel points corresponding to coordinate values in the second image data other than the coordinate values corresponding to the second layer to be consistent with the color value of the second layer, adjust the color values of a part of the sub-layers that are consistent with the color value of the second layer in the second image data to be consistent with the color value of the first layer, where the first layer is the bottom color layer of the file to be processed; and/or adjust the color values of that part of the sub-layers to be consistent with the color value of one of the plurality of sub-layers.
This mode corresponds to the third mode of step S106-2 in the first embodiment, that is, this mode is hiding and displaying of annotations in a specific application scenario.
The adjusting unit is further configured to obtain coordinate information of the second layer in the first image data in the first coordinate information; moving the position of a pixel point corresponding to the coordinate information of the second image layer in the second image data; the second layer is the other layer except the bottom color layer in the plurality of layers, and the second layer comprises one or more sub-image layers.
This mode corresponds to the mode four of the step S106-2 in the first embodiment, that is, the mode is the movement of the annotation in a specific application scenario.
In an optional implementation manner of this embodiment, the first light involved in this embodiment may be invisible light, and the second light may be visible light.
It should be noted that embodiment 2 is an embodiment of the apparatus corresponding to embodiment 1, and therefore, the manner in which the modules and units in this embodiment are implemented is consistent with the method steps described above.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Example 3
The present embodiment provides a terminal, as shown in fig. 6, the terminal includes:
a fill light assembly 62 for emitting first light and second light;
the camera shooting assembly 64 is coupled and linked with the light supplement lamp assembly 62 and is used for acquiring image data of the file to be processed through a first light and acquiring image data of the file to be processed through a second light;
the processor 66 is coupled and linked with the camera module 64, and is configured to obtain, according to the image data acquired by the first light, a color value of a pixel point in the first image data of the file to be processed and first coordinate information of the pixel point of the first image in the first image data; obtaining first coordinate information of pixel points in second image data of the file to be processed according to the image data acquired by the second light; and adjusting the pixel points in the second image data corresponding to the coordinate information of the pixel points in the first image data according to the color values of the pixel points in the first image data.
In an optional implementation manner of this embodiment, the processor is further configured to group the pixel points in the first image data according to a plurality of preset color value intervals to obtain a plurality of grouped layers, where color values of the pixel points in the same layer are in the same color value interval; and adjusting color values of pixel points in the second image data corresponding to the coordinate information of the pixel points in the one or more image layers according to the color values of the pixel points in the one or more image layers, and/or adjusting positions of the pixel points in the second image data corresponding to the coordinate information of the pixel points in the one or more image layers.
Based on the above-mentioned way that the processor adjusts the color value of the pixel point corresponding to the coordinate information of the pixel point in the one or more layers in the second image data according to the color value of the pixel point in the one or more layers, and/or adjusts the position of the pixel point corresponding to the coordinate information of the pixel point in the one or more layers in the second image data, in an optional implementation manner of this embodiment, the processor may further specifically implement the above-mentioned way by:
the first method is as follows: the processor is used for adjusting the color values of the pixel points corresponding to the coordinate values in the second image data except the coordinate values corresponding to the first image layer in the plurality of image layers to be consistent with the color values of the first image layer in the plurality of image layers; the first image layer is a bottom color image layer of the file to be processed.
The second method comprises the following steps: the processor is used for adjusting the color values of the pixel points corresponding to the coordinate values in the second image data except the coordinate values corresponding to the second layer in the plurality of layers to be consistent with the color values of the second layer in the plurality of layers; the second layer is the other layer except the bottom color layer in the plurality of layers, and the second layer comprises one or more sub-image layers.
The third method comprises the following steps: when the second layer includes a plurality of sub-layers, the processor is configured to adjust a color value of a pixel point corresponding to a coordinate value of the second image data, except for a coordinate value corresponding to the second layer of the plurality of layers, to be consistent with a color value of the second layer of the plurality of layers, and then adjust a color value of a part of the sub-layers of the plurality of sub-layers, which are consistent with the color value of the second layer, of the second image data to be consistent with the color value of the first layer; the first image layer is a bottom color image layer of the file to be processed; and/or,
after the color values of the pixel points corresponding to the coordinate values except the coordinate values corresponding to the second layer in the plurality of layers in the second image data are adjusted to be consistent with the color values of the second layer in the plurality of layers, the color values of a part of sub-map layers in the plurality of sub-map layers consistent with the color values of the second layer in the second image data are adjusted to be consistent with the color value of one sub-map layer in the plurality of sub-map layers.
The fourth method is as follows: the processor is used for acquiring coordinate information of a second layer in the first image data in the first coordinate information; moving the position of a pixel point corresponding to the coordinate information of the second image layer in the second image data; the second layer is the other layer except the bottom color layer in the plurality of layers, and the second layer comprises one or more sub-image layers.
Optionally, the camera assembly comprises: the light filtering switching component and the photosensitive component;
under the condition that the second light is filtered by the light filtering component, the photosensitive component acquires image data of the file to be processed through the first light; and under the condition that the first light is filtered by the light filtering component, the photosensitive component acquires image data of the file to be processed through the second light.
In this embodiment, the first light may be invisible light, and the second light may be visible light.
Example 4
The following illustrates embodiments of the invention in connection with alternative embodiments of the invention;
the present alternative embodiment provides an apparatus for processing a document image using invisible light, wherein the hardware part of the apparatus includes: a central processor module (corresponding to the processor in embodiment 3), a camera module (corresponding to the camera assembly in embodiment 3), and a fill-in light module for invisible light (corresponding to the fill-in light module in embodiment 3); the central processing unit module is used for processing image data; the camera module is used for shooting an image data source containing the invisible light spectrum, and can separate the invisible light and the visible light respectively in a hardware or software processing mode and transmit the invisible light and the visible light to the central processing unit for processing. And the invisible light supplement module is used for emitting a supplement lamp of invisible light, is connected with the central processing module through a bus in a control way, and is controlled by the central processing unit to be in an on-off state.
Based on the above hardware module, a description is given of a flow for processing a document image by using invisible light in the present alternative embodiment: firstly, starting a document shooting function, and enabling the equipment to enter a document shooting mode to start a shooting module and an image processing module; secondly, triggering the shooting module to control the turning-on of the invisible light supplementary lamp, further controlling the camera to collect image data by the shooting module, respectively collecting the invisible light part and the visible light part in the image data, and respectively storing the invisible light part and the visible light part in two image formats.
In addition, the image processing module analyzes and processes the invisible light part. Because the reflectivity of the invisible light differs when it illuminates different colors, the brightness value of each pixel point in the image differs; pixel points with similar color values are classified into the same layer, their positions are recorded, and the positions of the pixel points of different layers are stored in different arrays. The arrays of pixel positions of the different layers are then mapped to the pixel points in the visible light image, the images of the different layers are extracted separately, and they are stored separately; finally, the images of the different layers are further processed according to the user's needs.
Fig. 3 is a schematic diagram of a hardware position structure according to an embodiment of the present invention, and as shown in fig. 3, the camera module and the invisible light supplement lamp module are located on the same plane, so that the camera module can acquire the invisible light emitted by the invisible light supplement lamp; in addition, the invisible light supplement lamp in the optional embodiment has two light sources, and one light source emits visible light for a flash lamp during photographing; the other light source can emit infrared light for infrared supplementary lighting. Infrared is invisible to the human eye, but for the camera in this embodiment, both the infrared and visible spectrum portions can be collected. And images in the infrared portion of the spectrum and images in the visible portion of the spectrum may be filtered out separately by filters or software.
Fig. 4 is a schematic structural diagram of a camera according to an embodiment of the present invention. As shown in fig. 4, the camera includes two parts, a filtering switching module (corresponding to the filtering switching component in embodiment 3) and a photosensitive element (corresponding to the photosensitive component in embodiment 3), plus other necessary components. The photosensitive element is a common imaging element such as a CCD or CMOS sensor. The filtering switching module carries two optical filters, a visible light filter and an infrared filter: the visible light filter filters out all non-visible spectra, so that the photosensitive element receives only the visible spectrum and obtains a visible light image; the infrared filter filters out all non-infrared spectra, so that the photosensitive element receives only the infrared spectrum and obtains an infrared image. The filtering switching module is mechanical; an electrical signal controls which of the visible light filter and the infrared filter sits in front of the photosensitive element to perform filtering. In the normal mode the infrared filter is folded away and the photosensitive element collects visible light images. When a document is photographed and the user presses the shutter, the filtering switching module first keeps the visible light filter in place to collect a visible light image, then folds the visible light filter away and lowers the infrared filter to collect an infrared image.
It should be noted that the camera module may also adopt a software processing mode: without adding a filtering switching module, the signals of all spectra are collected by the photosensitive element and separated by software, where wavelengths greater than 760 nm are treated as the infrared spectrum and wavelengths less than 760 nm as the visible spectrum.
Fig. 5 is a schematic circuit diagram of a filtering switching module according to an embodiment of the present invention, as shown in fig. 5, a visible light filter, an infrared filter, and the switching module are connected through a bracket, a permanent magnet is disposed at the top end of the bracket, and the magnetic poles of the permanent magnets at the top ends of the two filters are opposite, as shown in fig. 5, in a normal photographing mode, the switching module applies a positive voltage + V, and the magnetic pole at the bottom end of the switching module is connected to N, so that the visible light filter bracket is repelled in the same pole, extends downward, and blocks in front of a photosensitive element, and the infrared filter bracket is attracted in the opposite pole, and retracts upward, and at this time, the camera is in a visible light photographing mode. When the camera module receives a system instruction and takes an infrared picture, the switching module applies negative voltage-V, the magnetic pole at the bottom end of the switching module is connected with S, the infrared filter support repels the same pole, extends downwards and is blocked in front of the photosensitive element, the opposite poles of the visible light filter top support attract each other and are folded upwards, and the camera is in an infrared shooting mode at the moment.
It should be noted that the filter switching module may adopt other mechanical devices to control the raising and lowering of the filter by different electrical signals.
The step of processing the document image with the invisible light in the present embodiment with reference to fig. 3 to 5 described above includes:
step S201: the document shooting mode is started, the light supplement lamp enters the infrared emission mode, and the infrared light source of the light supplement lamp is in a working state at the moment. The camera enters a document shooting mode, and forward voltage + V is applied to the filtering switching module.
Step S202: the user shoots, and the camera module controls the light filling lamp to emit infrared rays for light filling, and simultaneously the system sends an instruction to control the camera module to shoot pictures.
Step S203: after the camera module receives the system shooting instruction, the positive voltage +V of the filtering switching module is maintained to collect an image and a visible light image is collected; then the negative voltage -V is applied to the filtering switching module to collect an infrared image.
The image format collected by the camera module is RAW format, and is converted into YUV format for further processing, and the YUV format is stored in two array caches respectively, wherein the visible light image data array is A, and the infrared image data array is B.
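The capture and conversion sequence of steps S201 to S203 can be sketched as follows. The camera-driver calls `apply_filter_voltage` and `capture_raw` are hypothetical stand-ins for the filter-switching and RAW-capture interfaces, and the RAW pipeline is approximated by a standard full-range BT.601 RGB-to-YUV conversion:

```python
import numpy as np

def rgb_to_yuv(rgb):
    """Full-range BT.601 RGB -> YUV conversion, used here as a stand-in
    for the camera's RAW -> YUV processing."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128
    return np.stack([y, u, v], axis=-1)

def capture_document(camera):
    """S201-S203: keep +V (visible filter) for the first frame, then apply -V
    (infrared filter) for the second frame; cache both frames as YUV arrays."""
    camera.apply_filter_voltage(+1)           # +V: visible light filter in front of the sensor
    a = rgb_to_yuv(camera.capture_raw())      # array A: visible light image
    camera.apply_filter_voltage(-1)           # -V: infrared filter in front of the sensor
    b = rgb_to_yuv(camera.capture_raw())      # array B: infrared image
    return a, b
```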
Step S204: and separating the layers.
(1) Carrying out binarization processing on the visible light data array A, and taking a Y value in the visible light image data array to obtain the Y value of a certain pixel point;
When Y > M (threshold), set Y = 255; when Y < M (threshold), set Y = 0.
When global binarization is adopted, the threshold M can take the average of the Y values of all pixel points. Local binarization can also be adopted: the whole image is divided into N windows according to a certain rule, each of the N windows is binarized against a threshold M, and an adaptive threshold with higher precision can be used for local binarization. In that case the threshold is computed from a parametric equation over local features of the window, such as the mean E of the pixels, the variance P between pixels and the root-mean-square value Q between pixels, for example M = a×E + b×P + c×Q, where a, b and c are free parameters. The value of M can also be determined by the bimodal method, the P-parameter method, an iterative method, the OTSU method, and so on.
Record the coordinate values of all points with Y = 255; the region where Y = 255 is the ground color (background) region, and the coordinates of its points are recorded as array D. The coordinates of the remaining points form the text region. The coordinate values of the ground color region and of the text region correspond one-to-one between the visible light image and the invisible light image and remain consistent.
(2) For the invisible light data array B, take the array of Y values of all coordinate points. All Y values in the background area of this array are set to 0 to form a corrected array Ye, and the distribution of the Ye values is formed (treated as normally distributed), as shown in fig. 7 below, where the abscissa is the Ye value and the ordinate is the number of coordinate points.
(3) From the distribution curve of Ye, several peaks can be obtained, each corresponding to one layer. The peak value Y0 with the largest area is selected; the points satisfying Y0 - M0 < Ye < Y0 + M0 (where M0 is a threshold of the peak distribution, for example the variance of the sub-distribution) belong to the layer corresponding to that peak. Their coordinates are stored, recorded as layer 0, and the coordinate data are saved in array W.
(4) By analogy, layer 1, layer 2, and so on are obtained from the distribution; when the number of points in a layer falls below the threshold Mt, the layer separation stops.
(5) The point coordinates of layer 1, layer 2, …, layer N (all layers other than layer 0) are stored as arrays Z1, Z2, …, ZN respectively and are recorded as annotation layers.
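A sketch of this peak-based separation in Python, assuming Y_ir is the Y channel of the infrared array B as a 2-D NumPy array and D is the background coordinate array from step (1); for simplicity the peak is taken as the most frequent Ye value, whereas the description above selects the peak with the largest area:

import numpy as np

def separate_layers(Y_ir, D, M0=8, Mt=200, max_layers=8):
    Ye = Y_ir.astype(int)
    Ye[D[:, 0], D[:, 1]] = 0           # zero out the background area
    remaining = Ye > 0                 # points not yet assigned to a layer
    layers = []                        # layers[0] -> array W, then Z1, Z2, ...
    for _ in range(max_layers):
        values = Ye[remaining]
        if values.size == 0:
            break
        counts = np.bincount(values, minlength=256)
        Y0 = counts[1:].argmax() + 1   # peak value (most frequent nonzero Ye)
        in_peak = remaining & (Ye > Y0 - M0) & (Ye < Y0 + M0)
        coords = np.argwhere(in_peak)
        if coords.shape[0] < Mt:       # stop when a layer has too few points
            break
        layers.append(coords)
        remaining &= ~in_peak
    return layers                      # layer 0 (print), then annotation layers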
It should be noted that other mathematical algorithms may also be used for the layer separation, for example stripping the data layer by layer using expectation and variance values, or determining a representative element value from runs of identical data and then searching for other similar elements according to that value.
Step S205: after the background color layer, layer 0, layer 1, layer 2, … have been obtained, each layer is combined with the visible light image data so that the points of each layer correspond to their color values, giving image data in which the different color values are separated.
After the final visible image data and layer data are obtained, the system can use these data for further processing. The correspondence is as follows (a small lookup sketch follows the list):
Visible light image-array A
Infrared image-array B
Background color layer-array D
Layer 0-array W
Layer 1-array Z1
Layer 2-array Z2
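As a small lookup sketch for this correspondence, assuming A is an H x W x 3 YUV array and each layer is an N x 2 array of (row, column) coordinates as in the sketches above:

def layer_colors(A, layer_coords):
    # Visible-light color values of the points belonging to one layer.
    return A[layer_coords[:, 0], layer_coords[:, 1]]

# Example (names follow the correspondence list above):
# print_colors = layer_colors(A, W)   # colors of the print layer
# note1_colors = layer_colors(A, Z1)  # colors of annotation layer 1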
When infrared light illuminates the document, different colors and different materials absorb invisible light differently, so the reflected light differs. When infrared light illuminates different colors of the same material, the differently colored parts image with different color values under infrared light. By distinguishing these color values, different colors can be classified; these classes are the layers mentioned above, each layer corresponding to one color.
Consider the most common case: a user annotating a monochrome printed book with a pen.
Background color layer (array D): the color under visible light is the color of the paper; averaging the values of all the points recorded in array D gives the average paper color.
Layer 0 (array W): layer 0 is the layer corresponding to the color with the largest peak area in the distribution of the infrared image array B. For a printed document, the most widely distributed part should be the printed text, so layer 0 corresponds to the printed part, and array W holds the coordinate positions of the print.
Layers 1, 2, … correspond to the annotated parts of the document. The differences in the Y values at each layer's distribution peak come from the different brightness of different colors under infrared imaging, so each layer corresponds to one color, and their coordinate positions are Z1, Z2, ….
Based on steps S201-S205 of this embodiment, the following operations may be performed:
(1) Document restoration: take the data array A of the visible light image, calculate the average color D0 of the background color layer D, and modify the color values in A at the coordinates of all annotation layers (Z1, Z2, etc.) to D0; that is, the data of the annotation parts (layer 1, layer 2, etc.) are filled with the average background color, giving an array A0, which is the restored document. Operations such as character recognition can then be performed.
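A sketch of this restoration step under the same assumptions (A as an H x W x 3 array, D and the annotation layers Z1, Z2, … as coordinate arrays):

import numpy as np

def restore_document(A, D, annotation_layers):
    # Average background color D0 over the background points of A.
    D0 = A[D[:, 0], D[:, 1]].mean(axis=0)
    A0 = A.copy()
    for Z in annotation_layers:        # Z1, Z2, ...
        A0[Z[:, 0], Z[:, 1]] = D0      # fill annotation points with the paper color
    return A0                          # restored, annotation-free document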
(2) Annotation color differentiation: the coordinate positions Z1 and Z2 corresponding to the annotation layers (layer 1, layer 2, etc.) are the coordinate positions of the different colored annotations. In A, the average of the color values of all pixel points at the Z1 coordinates is taken and recorded as color 1, which is the color of layer 1. Color 2 of layer 2 is obtained in the same way.
Z1-layer 1-color 1
Z2-layer 2-color 2
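A corresponding sketch for computing the annotation colors, under the same assumptions:

def annotation_colors(A, annotation_layers):
    # Color k is the mean visible-light color over the points of layer k.
    return [A[Z[:, 0], Z[:, 1]].mean(axis=0) for Z in annotation_layers]

# colors = annotation_colors(A, [Z1, Z2])   # [color 1, color 2]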
(3) Hiding and displaying of annotations: on the basis of (1) and (2), annotations of different colors can be hidden or displayed by processing the values of array A0. A0 itself is the restored document, i.e. the document with the annotations hidden. In array A0, filling the color values of the points at the Z1 coordinates with color 1 yields an image that displays the color-1 annotations. By analogy, an image displaying the color-2 annotations, or an image with any combination of colored annotations, can be obtained.
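This can be sketched by filling selected layers back into the restored image A0, continuing the assumptions above:

def show_annotations(A0, annotation_layers, colors, visible):
    # visible is e.g. [True, False] to display color 1 and hide color 2.
    out = A0.copy()
    for Z, color, on in zip(annotation_layers, colors, visible):
        if on:
            out[Z[:, 0], Z[:, 1]] = color
    return out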
(4) Movement of annotations: when certain annotations lie within the printed text area, they may be moved to blank sections.
Take array W of layer 0 and process its coordinate values: let s0 and s1 be the minimum and maximum of all abscissas s, and t0 and t1 the minimum and maximum of all ordinates t. The rectangle formed by the coordinate points (s0, t0), (s0, t1), (s1, t0), (s1, t1) is the printed text area. For each point of the annotation layers (layer 1, layer 2, etc., i.e. arrays Z1, Z2, …), take its abscissa sz and ordinate tz; if the condition
s0 < sz < s1 and t0 < tz < t1
is satisfied, the coordinate lies within the printed text area and the translation is applied to it:
sz(e)=sz+Ts
tz(e)=tz+Tt
The horizontal translation amount Ts and the vertical translation amount Tt are obtained from a calculation over all coordinate points that belong to the same annotation layer and are adjacent to the coordinate.
In array A, the color values at the adjusted annotation-layer coordinates (sz(e), tz(e)) are modified to the corresponding annotation color, and the color values at the original annotation-layer coordinates are modified to the background color value D0; this realizes the movement of the annotation.
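A simplified sketch of the movement operation, assuming a single translation (Ts, Tt) for the whole annotation layer rather than the per-point neighbourhood calculation described above, and assuming the shifted points stay inside the image:

import numpy as np

def move_annotation(A, W, Z, color, D0, Ts, Tt):
    # Coordinates are (row, column) pairs; W is the print layer, Z an annotation layer.
    (r0, c0), (r1, c1) = W.min(axis=0), W.max(axis=0)
    inside = ((Z[:, 0] > r0) & (Z[:, 0] < r1) &
              (Z[:, 1] > c0) & (Z[:, 1] < c1))
    moved = Z.copy()
    moved[inside, 0] += Ts             # sz(e) = sz + Ts
    moved[inside, 1] += Tt             # tz(e) = tz + Tt
    out = A.copy()
    out[Z[inside, 0], Z[inside, 1]] = D0              # restore paper color
    out[moved[inside, 0], moved[inside, 1]] = color   # redraw the annotation
    return out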
By means of this embodiment, document scanning is enhanced with infrared light, enabling innovative functions such as annotation recognition and color recognition; in addition, problems currently encountered by users, such as blurred scan margins, poor binarization results, and the inability to distinguish handwriting from print, can be addressed well.
An embodiment of the invention also provides a storage medium. Optionally, in this embodiment, the storage medium may be configured to store program code for performing the following steps:
S1: acquiring image data of a file to be processed through first light to obtain color values of pixel points in first image data of the file to be processed and first coordinate information of the pixel points of the first image data in the first image data;
S2: acquiring image data of the file to be processed through second light to obtain first coordinate information of pixel points in the second image data of the file to be processed;
S3: adjusting the pixel points in the second image data corresponding to the coordinate information of the pixel points in the first image data according to the color values of the pixel points in the first image data.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of multiple computing devices. Optionally, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device; in some cases, the steps shown or described may be performed in an order different from that described herein, or they may be fabricated separately as individual integrated circuit modules, or multiple of them may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (13)

1. A method of processing a file, comprising:
acquiring image data of a file to be processed through first light to obtain color values of pixel points in first image data of the file to be processed and first coordinate information of the pixel points of the first image data in the first image data;
acquiring image data of the file to be processed through second light to obtain first coordinate information of pixel points in second image data of the file to be processed;
adjusting the pixel points in the second image data corresponding to the coordinate information of the pixel points in the first image data according to the color values of the pixel points in the first image data comprises:
grouping pixel points in the first image data according to a plurality of preset color value intervals to obtain a plurality of grouped layers, wherein the color values of the pixel points in the same layer are in the same color value interval;
and adjusting color values of pixel points in the second image data corresponding to the coordinate information of the pixel points in the one or more image layers according to the color values of the pixel points in the one or more image layers, and/or adjusting positions of the pixel points in the second image data corresponding to the coordinate information of the pixel points in the one or more image layers.
2. The method of claim 1, wherein the adjusting the color value of the pixel point in the second image data corresponding to the coordinate information of the pixel point in the one or more layers according to the color value of the pixel point in the one or more layers further comprises:
adjusting color values of pixel points corresponding to other coordinate values in the second image data except the coordinate value corresponding to the first image layer in the plurality of image layers to be consistent with the color value of the first image layer in the plurality of image layers;
and the first image layer is a bottom color image layer of the file to be processed.
3. The method of claim 1, wherein the adjusting the color value of the pixel point in the second image data corresponding to the coordinate information of the pixel point in the one or more layers according to the color value of the pixel point in the one or more layers further comprises:
adjusting color values of pixel points corresponding to other coordinate values in the second image data except for the coordinate value corresponding to the second layer in the plurality of layers to be consistent with the color value of the second layer in the plurality of layers;
the second layer is the other layers except the bottom color layer in the plurality of layers, and the second layer comprises one or more sub-image layers.
4. A method according to claim 3, characterized in that, in case the second layer comprises a plurality of sub-layers,
after adjusting color values of pixel points corresponding to other coordinate values in the second image data except for the coordinate value corresponding to the second layer in the plurality of layers to be consistent with the color value of the second layer in the plurality of layers, adjusting color values of part of sub-layers in the plurality of sub-layers consistent with the color value of the second layer in the second image data to be consistent with the color value of the first layer; the first image layer is a bottom color image layer of the file to be processed; and/or,
after adjusting color values of pixel points corresponding to other coordinate values in the second image data except for the coordinate value corresponding to the second layer in the plurality of layers to be consistent with the color value of the second layer in the plurality of layers, adjusting color values of a part of sub-layers in the plurality of sub-layers consistent with the color value of the second layer in the second image data to be consistent with the color value of one sub-layer in the plurality of sub-layers.
5. The method of claim 1, wherein the adjusting, according to the color values of the pixel points in the one or more layers, the position of the pixel point in the second image data corresponding to the coordinate information of the pixel point in the one or more layers comprises:
acquiring coordinate information of a second layer in the first image data in the first coordinate information;
moving the position of a pixel point corresponding to the coordinate information of the second image layer in the second image data;
the second layer is the other layers except the bottom color layer in the plurality of layers, and the second layer comprises one or more sub-image layers.
6. The method of any one of claims 1 to 5, wherein the first light is invisible light and the second light is visible light.
7. A terminal, comprising:
a fill light assembly for emitting first and second lights;
the camera shooting assembly is used for acquiring image data of the file to be processed through the first light and acquiring the image data of the file to be processed through the second light;
the processor is used for obtaining color values of pixel points in first image data of the file to be processed and first coordinate information of the pixel points of the first image data in the first image data according to the image data acquired by the first light; obtaining first coordinate information of pixel points in second image data of the file to be processed according to the image data acquired by the second light; adjusting pixel points in the second image data corresponding to the coordinate information of the pixel points in the first image data according to the color values of the pixel points in the first image data;
the processor is further configured to group the pixel points in the first image data according to a plurality of preset color value intervals to obtain a plurality of grouped layers, where color values of the pixel points in the same layer are in the same color value interval; and adjusting color values of pixel points in the second image data corresponding to the coordinate information of the pixel points in the one or more image layers according to the color values of the pixel points in the one or more image layers, and/or adjusting positions of the pixel points in the second image data corresponding to the coordinate information of the pixel points in the one or more image layers.
8. The terminal of claim 7,
the processor is further configured to adjust color values of pixel points corresponding to coordinate values in the second image data, except for coordinate values corresponding to a first layer of the plurality of layers, to be consistent with color values of the first layer of the plurality of layers;
and the first image layer is a bottom color image layer of the file to be processed.
9. The terminal of claim 7,
the processor is further configured to adjust color values of pixel points corresponding to coordinate values in the second image data, except for coordinate values corresponding to a second layer of the plurality of layers, to be consistent with color values of the second layer of the plurality of layers;
the second layer is the other layers except the bottom color layer in the plurality of layers, and the second layer comprises one or more sub-image layers.
10. The terminal according to claim 9, wherein in case that the second layer comprises a plurality of sub-layers,
the processor is further configured to adjust color values of pixel points corresponding to coordinate values in the second image data, except for coordinate values corresponding to a second layer in the plurality of layers, to be consistent with color values of the second layer in the plurality of layers, and then adjust color values of a part of sub-layers in the plurality of sub-layers, consistent with color values of the second layer, in the second image data to be consistent with color values of the first layer; the first image layer is a bottom color image layer of the file to be processed; and/or,
after adjusting color values of pixel points corresponding to other coordinate values in the second image data except for the coordinate value corresponding to the second layer in the plurality of layers to be consistent with the color value of the second layer in the plurality of layers, adjusting color values of a part of sub-layers in the plurality of sub-layers consistent with the color value of the second layer in the second image data to be consistent with the color value of one sub-layer in the plurality of sub-layers.
11. The terminal of claim 7,
the processor is further configured to acquire coordinate information of a second layer in the first image data in the first coordinate information; moving the position of a pixel point corresponding to the coordinate information of the second image layer in the second image data;
the second layer is the other layers except the bottom color layer in the plurality of layers, and the second layer comprises one or more sub-image layers.
12. The terminal of claim 7, wherein the camera assembly comprises: the light filtering switching component and the photosensitive component;
under the condition that the second light is filtered by the filtering switching component, the photosensitive component acquires image data of the file to be processed through the first light;
and under the condition that the first light is filtered out by the filtering switching component, the photosensitive component acquires image data of the file to be processed through the second light.
13. A terminal according to any of claims 7 to 12, wherein the first light is invisible light and the second light is visible light.
CN201710404417.8A 2017-06-01 2017-06-01 Method and terminal for processing files Active CN108986036B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710404417.8A CN108986036B (en) 2017-06-01 2017-06-01 Method and terminal for processing files
PCT/CN2018/078154 WO2018219005A1 (en) 2017-06-01 2018-03-06 File processing method and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710404417.8A CN108986036B (en) 2017-06-01 2017-06-01 Method and terminal for processing files

Publications (2)

Publication Number Publication Date
CN108986036A CN108986036A (en) 2018-12-11
CN108986036B true CN108986036B (en) 2022-01-28

Family

ID=64454371

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710404417.8A Active CN108986036B (en) 2017-06-01 2017-06-01 Method and terminal for processing files

Country Status (2)

Country Link
CN (1) CN108986036B (en)
WO (1) WO2018219005A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111080732B (en) * 2019-11-12 2023-09-22 望海康信(北京)科技股份公司 Method and system for forming virtual map
US11595625B2 (en) * 2020-01-02 2023-02-28 Qualcomm Incorporated Mechanical infrared light filter
CN114543995B (en) * 2022-04-26 2022-07-08 南通市海视光电有限公司 Color recognition instrument for chemical liquid phase color detection and detection method thereof

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1112759A (en) * 1993-12-09 1995-11-29 美国电报电话公司 Dropped-from document image compression
CN1477590A (en) * 2002-06-19 2004-02-25 微软公司 System and method for white writing board and voice frequency catching
CN102542548A (en) * 2011-12-30 2012-07-04 深圳市万兴软件有限公司 Method and device for correcting color between images
CN102687502A (en) * 2009-08-25 2012-09-19 Ip链有限公司 Reducing noise in a color image
CN104050651A (en) * 2014-06-19 2014-09-17 青岛海信电器股份有限公司 Scene image processing method and device
CN105205798A (en) * 2015-10-19 2015-12-30 北京经纬恒润科技有限公司 Image processing method and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7231080B2 (en) * 2001-02-13 2007-06-12 Orbotech Ltd. Multiple optical input inspection system
US6947593B2 (en) * 2001-10-05 2005-09-20 Hewlett-Packard Development Company, Lp. Digital image processing
US7031549B2 (en) * 2002-02-22 2006-04-18 Hewlett-Packard Development Company, L.P. Systems and methods for enhancing tone reproduction
US8824785B2 (en) * 2010-01-27 2014-09-02 Dst Technologies, Inc. Segregation of handwritten information from typographic information on a document
CN103544722B (en) * 2013-10-25 2016-08-17 深圳市掌网立体时代视讯技术有限公司 The person's handwriting method of modifying of a kind of numeral painting and calligraphy and device


Also Published As

Publication number Publication date
WO2018219005A1 (en) 2018-12-06
CN108986036A (en) 2018-12-11

Similar Documents

Publication Publication Date Title
CN108228114B (en) Control method and storage medium
CN108234814B (en) Control method and storage medium
KR102598109B1 (en) Electronic device and method for providing notification relative to image displayed via display and image stored in memory based on image analysis
CN108986036B (en) Method and terminal for processing files
US9497355B2 (en) Image processing apparatus and recording medium for correcting a captured image
DE60116949T2 (en) FACE DETECTION METHOD
CN104104886B (en) Overexposure image pickup method and device
JP6755787B2 (en) Image processing equipment, image processing methods and programs
JP4935302B2 (en) Electronic camera and program
US11627227B2 (en) Image processing apparatus, image processing method, and storage medium
US20040247175A1 (en) Image processing method, image capturing apparatus, image processing apparatus and image recording apparatus
US20110134276A1 (en) Method and apparatus for managing an album
JP2018097481A (en) Image processing apparatus, control method, and program
JP2009246887A (en) Image processor
US10708446B2 (en) Information processing apparatus, control method, and storage medium
KR100350789B1 (en) Method of raw color adjustment and atmosphere color auto extract in a image reference system
CA3087897A1 (en) System for assembling composite group image from individual subject images
EP1300810A3 (en) Method and device for validating security papers
JP6232906B2 (en) Image processing apparatus and image processing program
KR101513931B1 (en) Auto-correction method of composition and image apparatus with the same technique
US20140355013A1 (en) Systems and Methods for Red-Eye Correction
Lozano-Fernández et al. Auto-cropping of phone camera color images to segment cardiac signals in ECG printouts
CN103177278A (en) Immediate and high-efficiency two-dimension code scanning method and special scanning device thereof
JP4292873B2 (en) Image processing method, image processing apparatus, and image recording apparatus
CN100417176C (en) Image processing apparatus and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant