CN115984126A - Optical image correction method and device based on input instruction - Google Patents

Optical image correction method and device based on input instruction

Info

Publication number
CN115984126A
CN115984126A (application CN202211578708.6A)
Authority
CN
China
Prior art keywords
optical image
data
input instruction
input
correction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211578708.6A
Other languages
Chinese (zh)
Inventor
温建伟
邓迪旻
袁潮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhuohe Technology Co Ltd
Original Assignee
Beijing Zhuohe Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhuohe Technology Co Ltd filed Critical Beijing Zhuohe Technology Co Ltd
Priority to CN202211578708.6A
Publication of CN115984126A
Legal status: Pending

Abstract

The invention discloses an optical image correction method and device based on an input instruction. The method comprises: acquiring optical image raw data and optical image analysis data; generating input instruction information according to the optical image analysis data and the optical image raw data; inputting the input instruction information and the optical image raw data into an Ais algorithm matrix to obtain a correction strategy; and correcting the optical image raw data according to the correction strategy to obtain corrected image data. The invention solves the technical problem that prior-art optical image acquisition schemes only optimize and acquire the optical image and offer no feasible correction scheme that can generate a correction strategy directly from input information, which reduces the efficiency and quality of optical image generation and processing.

Description

Optical image correction method and device based on input instruction
Technical Field
The invention relates to the field of image restoration processing, and in particular to an optical image correction method and device based on an input instruction.
Background
With the continuous development of intelligent technology, people increasingly use intelligent devices in daily life, work, and study; these intelligent means have improved quality of life and increased the efficiency of study and work.
At present, high-precision camera systems and camera array devices in different scenes generate and collect optical images of different precision and functionality according to the application requirements of each scene; such images serve purposes such as security assessment and the detection of danger signals. However, prior-art optical image acquisition schemes only optimize and acquire the optical image and offer no feasible correction scheme that can generate a correction strategy directly from input information, which reduces the efficiency and quality of optical image generation and processing.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides an optical image correction method and device based on an input instruction, to at least solve the technical problem that prior-art optical image acquisition schemes only optimize and acquire the optical image and offer no feasible correction scheme that can generate a correction strategy directly from input information, which reduces the efficiency and quality of optical image generation and processing.
According to one aspect of an embodiment of the present invention, there is provided an optical image correction method based on an input instruction, including: acquiring optical image raw data and optical image analysis data; generating input instruction information according to the optical image analysis data and the optical image raw data; inputting the input instruction information and the optical image raw data into an Ais algorithm matrix to obtain a correction strategy; and correcting the optical image raw data according to the correction strategy to obtain corrected image data.
Optionally, acquiring the optical image raw data and the optical image analysis data includes: acquiring the optical image raw data from a high-precision camera system; and generating the optical image analysis data from the optical image raw data, wherein the optical image analysis data includes a flaw analysis result and correction instruction guidance.
Optionally, generating the input instruction information according to the optical image analysis data and the optical image raw data includes: training an input instruction generation model on historical analysis data; inputting the optical image analysis data and the optical image raw data into the input instruction generation model as feature vectors to obtain a first input result; and fitting the input information with the first input result to obtain the input instruction information.
Optionally, inputting the input instruction information and the optical image raw data into the Ais algorithm matrix to obtain the correction strategy includes: acquiring pixel coordinate parameters of the optical image raw data; and generating the correction strategy from the pixel coordinate parameters and the input instruction information through the formula
X=δ(W·[h,w]+c)
where X is the correction strategy data, δ is the Ais matrix operator, W is the phase number of the optical image, h and w are the pixel coordinate parameters, and c is the input instruction information.
According to another aspect of the embodiments of the present invention, there is also provided an optical image correction apparatus based on an input instruction, including: an acquisition module configured to acquire optical image raw data and optical image analysis data; a generating module configured to generate input instruction information according to the optical image analysis data and the optical image raw data; a calculation module configured to input the input instruction information and the optical image raw data into an Ais algorithm matrix to obtain a correction strategy; and a correction module configured to correct the optical image raw data according to the correction strategy to obtain corrected image data.
Optionally, the acquisition module includes: an acquisition unit configured to acquire the optical image raw data from a high-precision camera system; and a generating unit configured to generate the optical image analysis data from the optical image raw data, wherein the optical image analysis data includes a flaw analysis result and correction instruction guidance.
Optionally, the generating module includes: a training unit configured to train an input instruction generation model on historical analysis data; an input unit configured to input the optical image analysis data and the optical image raw data into the input instruction generation model as feature vectors to obtain a first input result; and a fitting unit configured to fit the input information with the first input result to obtain the input instruction information.
Optionally, the calculation module includes: an acquisition unit configured to acquire pixel coordinate parameters of the optical image raw data; and a calculation unit configured to generate the correction strategy from the pixel coordinate parameters of the optical image raw data and the input instruction information through the formula
X=δ(W·[h,w]+c)
where X is the correction strategy data, δ is the Ais matrix operator, W is the phase number of the optical image, h and w are the pixel coordinate parameters, and c is the input instruction information.
According to another aspect of the embodiments of the present invention, there is also provided a non-volatile storage medium including a stored program, wherein, when the program runs, it controls a device in which the non-volatile storage medium is located to execute the optical image correction method based on an input instruction.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device including a processor and a memory; the memory stores computer-readable instructions for execution by the processor, and the computer-readable instructions, when executed, perform the optical image correction method based on an input instruction.
In the embodiment of the invention, optical image raw data and optical image analysis data are acquired; input instruction information is generated according to the optical image analysis data and the optical image raw data; the input instruction information and the optical image raw data are input into an Ais algorithm matrix to obtain a correction strategy; and the optical image raw data is corrected according to the correction strategy to obtain corrected image data. This solves the technical problem that prior-art optical image acquisition schemes only optimize and acquire the optical image and offer no feasible correction scheme that can generate a correction strategy directly from input information, which reduces the efficiency and quality of optical image generation and processing.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
Fig. 1 is a flowchart of an optical image correction method based on an input instruction according to an embodiment of the present invention;
Fig. 2 is a block diagram of an optical image correction apparatus based on an input instruction according to an embodiment of the present invention;
Fig. 3 is a block diagram of a terminal device for performing a method according to an embodiment of the invention;
Fig. 4 shows a memory unit for holding or carrying program code implementing a method according to an embodiment of the invention.
Detailed Description
In order to make those skilled in the art better understand the technical solutions of the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In accordance with an embodiment of the present invention, a method embodiment of an optical image correction method based on input instructions is provided. It is noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system, such as by a set of computer-executable instructions, and that, while a logical order is illustrated in the flowchart, in some cases the steps may be performed in an order different from the one shown or described here.
Example one
Fig. 1 is a flowchart of an optical image correction method based on an input instruction according to an embodiment of the present invention. As shown in Fig. 1, the method includes the following steps:
Step S102: acquire optical image raw data and optical image analysis data.
Specifically, to solve the technical problem that prior-art optical image acquisition schemes only optimize and acquire the optical image and offer no feasible correction scheme that can generate a correction strategy directly from input information, which reduces the efficiency and quality of optical image generation and processing, the embodiment of the invention first acquires the optical image data captured by the original high-precision camera device and generates the optical image analysis data, so that the analysis data can guide the input instruction and achieve the technical effect of intelligently correcting the optical image.
Optionally, acquiring the optical image raw data and the optical image analysis data includes: acquiring the optical image raw data from a high-precision camera system; and generating the optical image analysis data from the optical image raw data, wherein the optical image analysis data includes a flaw analysis result and correction instruction guidance.
Specifically, after the original optical image data is acquired, the embodiment of the present invention summarizes and integrates the optical image raw data and the optical image analysis data, in preparation for the subsequent use of the analysis data and the processing of the instruction information. For example, acquiring the optical image raw data and the optical image analysis data includes: acquiring the optical image raw data from a high-precision camera system; and generating the optical image analysis data from the optical image raw data, wherein the analysis data includes a flaw analysis result and correction instruction guidance.
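The patent does not specify data structures for this acquisition step. Purely as an illustrative sketch, the generation of analysis data might look like the following, where the brightness-threshold flaw heuristic, the function names, and the guidance string are all assumptions of this sketch and not the patent's actual implementation:

```python
# Illustrative sketch of step S102: acquire raw optical-image data and
# derive analysis data (a flaw analysis result plus correction guidance).
# The threshold-based flaw heuristic below is an assumption for
# demonstration only.

def acquire_raw_image():
    # Stand-in for a capture from a high-precision camera system:
    # a small grid of pixel intensities in [0, 255].
    return [[120, 130, 10],
            [125, 128, 250],
            [119, 131, 127]]

def generate_analysis_data(raw, low=20, high=240):
    """Flag pixels outside [low, high] as flaws and attach guidance."""
    flaws = [(r, c) for r, row in enumerate(raw)
             for c, v in enumerate(row) if v < low or v > high]
    guidance = ("clamp outliers to neighborhood mean"
                if flaws else "no correction needed")
    return {"flaws": flaws, "guidance": guidance}

raw = acquire_raw_image()
analysis = generate_analysis_data(raw)
```

Here the returned dictionary plays the role of the "flaw analysis result and correction instruction guidance" that the later steps consume.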
Step S104: generate input instruction information according to the optical image analysis data and the optical image raw data.
Optionally, generating the input instruction information according to the optical image analysis data and the optical image raw data includes: training an input instruction generation model on historical analysis data; inputting the optical image analysis data and the optical image raw data into the input instruction generation model as feature vectors to obtain a first input result; and fitting the input information with the first input result to obtain the input instruction information.
Specifically, after acquiring the original optical image data, the embodiment of the present invention performs the intelligent correction operation on it according to the user's correction input instruction for the image. Generating the input instruction information according to the optical image analysis data and the optical image raw data therefore includes: training an input instruction generation model on historical analysis data; inputting the optical image analysis data and the optical image raw data into the model as feature vectors to obtain a first input result; and fitting the input information with the first input result to obtain the input instruction information.
It should be noted that the historical analysis data used to train the input instruction generation model may be a source data set obtained by collecting previous optical image data of the current camera array through a cloud big-data platform and summarizing the correction results together with the pre-correction data as hidden-layer training reference data. The input instruction generation model adopted by the embodiment of the invention may be a DNN (deep neural network) model.
It should be further noted that a deep neural network (DNN) is a multi-layer neural network that performs feature learning by using the output features of each layer as the input of the next layer; after layer-by-layer feature mapping, the features of the existing samples are mapped into another feature space, so that a better feature representation of the input is learned. A deep neural network stacks many nonlinear feature transformations and can fit highly complex functions. Its core ideas can be summarized in three points: (1) the pre-training of each layer uses unsupervised learning; (2) the layers are trained one by one, with the output of each layer serving as the input of the next; (3) a layer-by-layer pre-training mechanism replaces the back-propagation training mechanism of a traditional neural network, overcoming the traditional network's tendency to overfit and its slow training speed. The advantages of adopting a DNN model in the embodiment of the invention therefore include: avoiding the time and labor of manually designing features; obtaining the primary features of each layer through layer-by-layer pre-training; more efficient (exponentially so) distributed data learning; and, compared with shallow modeling, a finer and more efficient representation of complex nonlinear problems.
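The layer-by-layer mapping described above can be illustrated with a toy forward pass. The layer sizes, weight values, and sigmoid activation below are arbitrary assumptions for illustration; the patent does not disclose the actual DNN architecture, weights, or training procedure:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dense(inputs, weights, bias):
    """One fully connected layer: out_j = sigmoid(sum_i w_ji * x_i + b_j)."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, bias)]

# Each layer's output feeds the next layer, mirroring the layer-by-layer
# feature mapping described for the DNN. The feature vector stands in for
# the concatenated analysis data and raw image data.
features = [0.5, -1.0, 0.25]
h1 = dense(features, [[0.2, -0.1, 0.4], [0.0, 0.3, -0.2]], [0.1, -0.1])
first_input_result = dense(h1, [[1.0, -1.0]], [0.0])
```

In this reading, `first_input_result` corresponds to the "first input result" that is subsequently fitted with the input information.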
Step S106: input the input instruction information and the optical image raw data into the Ais algorithm matrix to obtain a correction strategy.
Optionally, inputting the input instruction information and the optical image raw data into the Ais algorithm matrix to obtain the correction strategy includes: acquiring pixel coordinate parameters of the optical image raw data; and generating the correction strategy from the pixel coordinate parameters and the input instruction information through the formula
X=δ(W·[h,w]+c)
where X is the correction strategy data, δ is the Ais matrix operator, W is the phase number of the optical image, h and w are the pixel coordinate parameters, and c is the input instruction information.
Specifically, to obtain the correction strategy for the correction operation, the input instruction information and the optical image raw data may be input into the Ais algorithm matrix: the pixel coordinate parameters of the optical image raw data are acquired, and the coordinate parameters and the input instruction information are fitted through the correction formula to produce a matrix result, so that the corresponding X in the Ais matrix is extracted and a final binary result is obtained.
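Read literally, X = δ(W·[h,w] + c) is an affine map over the pixel coordinates passed through the operator δ. The patent does not define the Ais matrix operator, so the sketch below assumes, purely for illustration, that δ acts as an elementwise sigmoid and that W is a two-component weight vector; all numeric values are hypothetical:

```python
import math

def ais_operator(x):
    # Assumption: the undefined Ais matrix operator δ is modeled here
    # as a sigmoid squashing the affine result into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def correction_strategy(W, h, w, c):
    """X = δ(W · [h, w] + c), with W applied as a dot product over (h, w)."""
    return ais_operator(W[0] * h + W[1] * w + c)

# Hypothetical weights, pixel coordinates, and instruction term c.
X = correction_strategy(W=[0.01, -0.02], h=120, w=80, c=0.5)
# X ≈ 0.525 for these assumed values
```

Under this reading, X would be a per-pixel strength in (0, 1) that the correction step can act on.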
Step S108: correct the optical image raw data according to the correction strategy to obtain corrected image data.
Specifically, after the correction strategy is obtained from the original optical image data and the optical image analysis data, the system possesses instruction data describing how the correction operation should be performed on the optical image data to be corrected, and the correction is then carried out according to the correction strategy.
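Step S108 can then be illustrated as applying the strategy value to every pixel. The blend-toward-target rule below is an assumed stand-in for demonstration; the patent leaves the concrete correction operation unspecified:

```python
def apply_correction(raw, strategy, target=128):
    """Blend each pixel toward `target` with weight `strategy` in [0, 1].
    An illustrative assumption, not the patent's actual correction rule."""
    return [[round((1 - strategy) * v + strategy * target) for v in row]
            for row in raw]

corrected = apply_correction([[120, 10], [250, 130]], strategy=0.5)
# corrected == [[124, 69], [189, 129]]
```

A strategy of 0 would leave the image untouched, while 1 would pull every pixel fully to the target value; intermediate values give a partial correction.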
This embodiment solves the technical problem that prior-art optical image acquisition schemes only optimize and acquire the optical image and offer no feasible correction scheme that can generate a correction strategy directly from input information, which reduces the efficiency and quality of optical image generation and processing.
Example two
Fig. 2 is a block diagram of an optical image correction apparatus based on an input instruction according to an embodiment of the present invention. As shown in Fig. 2, the apparatus includes:
The acquisition module 20, configured to acquire optical image raw data and optical image analysis data.
Specifically, to solve the technical problem that prior-art optical image acquisition schemes only optimize and acquire the optical image and offer no feasible correction scheme that can generate a correction strategy directly from input information, which reduces the efficiency and quality of optical image generation and processing, the embodiment of the invention first acquires the optical image data captured by the original high-precision camera device and generates the optical image analysis data, so that the analysis data can guide the input instruction and achieve the technical effect of intelligently correcting the optical image.
Optionally, the acquisition module includes: an acquisition unit configured to acquire the optical image raw data from a high-precision camera system; and a generating unit configured to generate the optical image analysis data from the optical image raw data, wherein the optical image analysis data includes a flaw analysis result and correction instruction guidance.
Specifically, after the original optical image data is acquired, the embodiment of the present invention summarizes and integrates the optical image raw data and the optical image analysis data, in preparation for the subsequent use of the analysis data and the processing of the instruction information. For example, acquiring the optical image raw data and the optical image analysis data includes: acquiring the optical image raw data from a high-precision camera system; and generating the optical image analysis data from the optical image raw data, wherein the analysis data includes a flaw analysis result and correction instruction guidance.
The generating module 22, configured to generate input instruction information according to the optical image analysis data and the optical image raw data.
Optionally, the generating module includes: a training unit configured to train an input instruction generation model on historical analysis data; an input unit configured to input the optical image analysis data and the optical image raw data into the input instruction generation model as feature vectors to obtain a first input result; and a fitting unit configured to fit the input information with the first input result to obtain the input instruction information.
Specifically, after acquiring the original optical image data, the embodiment of the present invention performs the intelligent correction operation on it according to the user's correction input instruction for the image. Generating the input instruction information according to the optical image analysis data and the optical image raw data therefore includes: training an input instruction generation model on historical analysis data; inputting the optical image analysis data and the optical image raw data into the model as feature vectors to obtain a first input result; and fitting the input information with the first input result to obtain the input instruction information.
It should be noted that the historical analysis data used to train the input instruction generation model may be a source data set obtained by collecting previous optical image data of the current camera array through a cloud big-data platform and summarizing the correction results together with the pre-correction data as hidden-layer training reference data. The input instruction generation model adopted by the embodiment of the invention may be a DNN (deep neural network) model.
It should be further noted that a deep neural network (DNN) is a multi-layer neural network that performs feature learning by using the output features of each layer as the input of the next layer; after layer-by-layer feature mapping, the features of the existing samples are mapped into another feature space, so that a better feature representation of the input is learned. A deep neural network stacks many nonlinear feature transformations and can fit highly complex functions. Its core ideas can be summarized in three points: (1) the pre-training of each layer uses unsupervised learning; (2) the layers are trained one by one, with the output of each layer serving as the input of the next; (3) a layer-by-layer pre-training mechanism replaces the back-propagation training mechanism of a traditional neural network, overcoming the traditional network's tendency to overfit and its slow training speed. The advantages of adopting a DNN model in the embodiment of the invention therefore include: avoiding the time and labor of manually designing features; obtaining the primary features of each layer through layer-by-layer pre-training; more efficient (exponentially so) distributed data learning; and, compared with shallow modeling, a finer and more efficient representation of complex nonlinear problems.
The calculation module 24, configured to input the input instruction information and the optical image raw data into the Ais algorithm matrix to obtain a correction strategy.
Optionally, the calculation module includes: an acquisition unit configured to acquire pixel coordinate parameters of the optical image raw data; and a calculation unit configured to generate the correction strategy from the pixel coordinate parameters of the optical image raw data and the input instruction information through the formula
X=δ(W·[h,w]+c)
where X is the correction strategy data, δ is the Ais matrix operator, W is the phase number of the optical image, h and w are the pixel coordinate parameters, and c is the input instruction information.
Specifically, to obtain the correction strategy for the correction operation, the input instruction information and the optical image raw data may be input into the Ais algorithm matrix: the pixel coordinate parameters of the optical image raw data are acquired, and the coordinate parameters and the input instruction information are fitted through the correction formula to produce a matrix result, so that the corresponding X in the Ais matrix is extracted and a final binary result is obtained.
The correction module 26, configured to correct the optical image raw data according to the correction strategy to obtain corrected image data.
Specifically, after the correction strategy is obtained from the original optical image data and the optical image analysis data, the system possesses instruction data describing how the correction operation should be performed on the optical image data to be corrected, and the correction is then carried out according to the correction strategy.
This embodiment solves the technical problem that prior-art optical image acquisition schemes only optimize and acquire the optical image and offer no feasible correction scheme that can generate a correction strategy directly from input information, which reduces the efficiency and quality of optical image generation and processing.
According to another aspect of the embodiments of the present invention, there is also provided a non-volatile storage medium including a stored program, wherein the program controls an apparatus in which the non-volatile storage medium is located to execute an optical image correction method based on an input instruction when running.
Specifically, the method comprises the following steps: acquiring original data of an optical image and analysis data of the optical image; generating input instruction information according to the optical image analysis data and the optical image original data; inputting the input instruction information and the original data of the optical image into an Esse algorithm matrix to obtain a correction strategy; and correcting the original light image data according to the correction strategy to obtain corrected image data. Optionally, the acquiring of the light image raw data and the light image analysis data includes: acquiring the original data of the optical image in a high-precision camera system; generating the light image analysis data from the light image raw data, wherein the light image analysis data comprises: and (4) a flaw analysis result and a correction instruction guide opinion. Optionally, the generating input instruction information according to the light image analysis data and the light image raw data includes: training an input instruction generation model according to historical analysis data; generating a model through the input instruction, and inputting the image analysis data and the light image original data as feature vectors to obtain a first input result; and fitting the input information and the first input result to obtain the input instruction information. 
Optionally, the inputting of the input instruction information and the optical image raw data into the Esse algorithm matrix to obtain the correction strategy includes: acquiring pixel coordinate parameters of the optical image raw data; and generating the correction strategy through the formula X = δ(W · [h, w] + c) according to the pixel coordinate parameters of the optical image raw data and the input instruction information, where X is the correction strategy data, δ is the Esse matrix operator, W is the optical image phase number, h and w are the pixel coordinate parameters, and c is the input instruction information.
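The formula X = δ(W · [h, w] + c) can be evaluated as in the following sketch. Since the document does not define the δ (Esse matrix) operator or the shapes of W and c, a sigmoid activation and a 2×2 coefficient matrix are assumed here purely for illustration:

```python
import numpy as np

def sigmoid(z):
    # Stand-in for the delta operator; the patent calls it the "Esse
    # matrix operator" without defining it, so a sigmoid is assumed.
    return 1.0 / (1.0 + np.exp(-z))

def correction_strategy(W, h, w, c, delta=sigmoid):
    """Evaluate X = delta(W . [h, w] + c) for one pixel coordinate.

    W    : coefficient matrix (the "optical image phase number" in the claim)
    h, w : pixel coordinate parameters
    c    : input instruction information (bias-like term)
    """
    hw = np.array([h, w], dtype=float)
    return delta(W @ hw + np.asarray(c, dtype=float))

W = np.array([[0.01, 0.0],            # assumed phase coefficients,
              [0.0, 0.01]])           # chosen only for demonstration
X = correction_strategy(W, h=120, w=80, c=[-1.0, -0.5])
print(X)  # correction-strategy data for pixel (120, 80)
```

The affine form W · [h, w] + c followed by a nonlinearity δ resembles a single neural-network layer evaluated per pixel; that reading is an interpretation, not something the patent states.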
According to another aspect of the embodiments of the present invention, there is also provided an electronic apparatus including a processor and a memory; the memory stores computer-readable instructions for execution by the processor, wherein the computer-readable instructions, when executed, perform an optical image correction method based on an input instruction.
Specifically, the method comprises the following steps: acquiring optical image raw data and optical image analysis data; generating input instruction information according to the optical image analysis data and the optical image raw data; inputting the input instruction information and the optical image raw data into an Esse algorithm matrix to obtain a correction strategy; and correcting the optical image raw data according to the correction strategy to obtain corrected image data. Optionally, the acquiring of the optical image raw data and the optical image analysis data includes: acquiring the optical image raw data from a high-precision camera system; and generating the optical image analysis data from the optical image raw data, wherein the optical image analysis data includes a flaw analysis result and correction instruction guidance. Optionally, the generating of the input instruction information according to the optical image analysis data and the optical image raw data includes: training an input instruction generation model on historical analysis data; inputting the optical image analysis data and the optical image raw data as feature vectors into the input instruction generation model to obtain a first input result; and fitting the input information and the first input result to obtain the input instruction information.
Optionally, the inputting of the input instruction information and the optical image raw data into the Esse algorithm matrix to obtain the correction strategy includes: acquiring pixel coordinate parameters of the optical image raw data; and generating the correction strategy through the formula X = δ(W · [h, w] + c) according to the pixel coordinate parameters of the optical image raw data and the input instruction information, where X is the correction strategy data, δ is the Esse matrix operator, W is the optical image phase number, h and w are the pixel coordinate parameters, and c is the input instruction information.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technical content can be implemented in other manners. The above-described apparatus embodiments are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or may not be executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, fig. 3 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present application. As shown in fig. 3, the terminal device may include an input device 30, a processor 31, an output device 32, a memory 33, and at least one communication bus 34. The communication bus 34 is used to realize communication connections between the elements. The memory 33 may comprise a high speed RAM memory, and may also include a non-volatile memory NVM, such as at least one disk memory, in which various programs may be stored for performing various processing functions and implementing the method steps of the present embodiment.
Alternatively, the processor 31 may be implemented by, for example, a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor, or other electronic components, and the processor 31 is coupled to the input device 30 and the output device 32 through a wired or wireless connection.
Optionally, the input device 30 may include a variety of input devices, for example, at least one of a user-oriented user interface, a device-oriented device interface, a software-programmable interface, a camera, and a sensor. Optionally, the device-oriented device interface may be a wired interface for data transmission between devices, or a hardware plug-in interface (e.g., a USB interface, a serial port, etc.) for data transmission between devices. Optionally, the user-oriented user interface may be, for example, control keys, a voice input device for receiving voice input, and a touch sensing device (e.g., a touch screen or touch pad with a touch sensing function) for receiving user touch input. Optionally, the software-programmable interface may be, for example, an entry for a user to edit or modify a program, such as an input pin interface or an input interface of a chip. Optionally, the input device 30 may also include a transceiver, such as a radio frequency transceiver chip with a communication function, a baseband processing chip, and a transceiver antenna. An audio input device such as a microphone may receive voice data. The output device 32 may include a display, a speaker, or other output devices.
In this embodiment, the processor of the terminal device includes a module for executing functions of each module of the data processing apparatus in each device, and specific functions and technical effects may be obtained by referring to the foregoing embodiments, which are not described herein again.
Fig. 4 is a schematic hardware structure diagram of a terminal device according to another embodiment of the present application. Fig. 4 is a specific embodiment of fig. 3 in an implementation process. As shown in fig. 4, the terminal device of the present embodiment includes a processor 41 and a memory 42.
The processor 41 executes the computer program code stored in the memory 42 to implement the method in the above-described embodiment.
The memory 42 is configured to store various types of data to support operations at the terminal device. Examples of such data include instructions for any application or method operating on the terminal device, such as messages, pictures, and videos. The memory 42 may include a Random Access Memory (RAM) and may also include a non-volatile memory, such as at least one disk memory.
Optionally, the processor 41 is provided in the processing assembly 40. The terminal device may further include: a communication component 43, a power component 44, a multimedia component 45, an audio component 46, an input/output interface 47 and/or a sensor component 48. The specific components included in the terminal device are set according to actual requirements, which is not limited in this embodiment.
The processing component 40 generally controls the overall operation of the terminal device. Processing component 40 may include one or more processors 41 to execute instructions to perform all or a portion of the steps of the above-described method. Further, processing component 40 may include one or more modules that facilitate interaction between processing component 40 and other components. For example, the processing component 40 may include a multimedia module to facilitate interaction between the multimedia component 45 and the processing component 40.
The power supply component 44 provides power to the various components of the terminal device. The power components 44 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the terminal device.
The multimedia component 45 includes a display screen that provides an output interface between the terminal device and the user. In some embodiments, the display screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the display screen includes a touch panel, the display screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The audio component 46 is configured to output and/or input audio signals. For example, the audio component 46 includes a Microphone (MIC) configured to receive external audio signals when the terminal device is in an operational mode, such as a voice recognition mode. The received audio signal may further be stored in the memory 42 or transmitted via the communication component 43. In some embodiments, audio assembly 46 also includes a speaker for outputting audio signals.
The input/output interface 47 provides an interface between the processing component 40 and peripheral interface modules, which may be click wheels, buttons, etc. These buttons may include, but are not limited to: a volume button, a start button, and a lock button.
The sensor assembly 48 includes one or more sensors for providing various aspects of status assessment for the terminal device. For example, the sensor assembly 48 may detect the open/closed status of the terminal device, the relative positioning of the assemblies, the presence or absence of user contact with the terminal device. The sensor assembly 48 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, including detecting the distance between the user and the terminal device. In some embodiments, the sensor assembly 48 may also include a camera or the like.
The communication component 43 is configured to facilitate wired or wireless communication between the terminal device and other devices. The terminal device may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one embodiment, the terminal device may include a SIM card slot for inserting a SIM card, so that the terminal device can log on to a GPRS network and establish communication with the server via the Internet.
From the above, the communication component 43, the audio component 46, the input/output interface 47 and the sensor component 48 referred to in the embodiment of fig. 4 can be implemented as the input device in the embodiment of fig. 3.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described apparatus embodiments are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or may not be executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that those skilled in the art can make various modifications and refinements without departing from the principle of the present invention, and such modifications and refinements should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. An optical image correction method based on an input instruction, comprising:
acquiring original data of an optical image and analysis data of the optical image;
generating input instruction information according to the optical image analysis data and the optical image original data;
inputting the input instruction information and the original data of the optical image into an Esse algorithm matrix to obtain a correction strategy;
and correcting the original data of the optical image according to the correction strategy to obtain corrected image data.
2. The method of claim 1, wherein the acquiring of the optical image raw data and the optical image analysis data comprises:
acquiring the optical image raw data from a high-precision camera system; and
generating the optical image analysis data from the optical image raw data, wherein the optical image analysis data comprises: a flaw analysis result and correction instruction guidance.
3. The method of claim 1, wherein the generating of the input instruction information according to the optical image analysis data and the optical image raw data comprises:
training an input instruction generation model on historical analysis data;
inputting the optical image analysis data and the optical image raw data as feature vectors into the input instruction generation model to obtain a first input result; and
fitting the input information and the first input result to obtain the input instruction information.
4. The method of claim 1, wherein the inputting of the input instruction information and the optical image raw data into the Esse algorithm matrix to obtain the correction strategy comprises:
acquiring pixel coordinate parameters of the optical image raw data; and
generating the correction strategy according to the pixel coordinate parameters of the optical image raw data and the input instruction information through the formula
X=δ(W·[h,w]+c)
wherein X is the correction strategy data, δ is the Esse matrix operator, W is the optical image phase number, h and w are the pixel coordinate parameters, and c is the input instruction information.
5. An optical image correction apparatus based on an input command, comprising:
the acquisition module is used for acquiring original data of the optical image and analysis data of the optical image;
the generating module is used for generating input instruction information according to the optical image analysis data and the optical image original data;
the calculation module is used for inputting the input instruction information and the original data of the optical image into an Esse algorithm matrix to obtain a correction strategy;
and the correction module is used for correcting the original data of the optical image according to the correction strategy to obtain corrected image data.
6. The apparatus of claim 5, wherein the acquisition module comprises:
an acquisition unit, configured to acquire the optical image raw data from a high-precision camera system; and
a generating unit, configured to generate the optical image analysis data from the optical image raw data, wherein the optical image analysis data comprises: a flaw analysis result and correction instruction guidance.
7. The apparatus of claim 5, wherein the generating module comprises:
a training unit, configured to train an input instruction generation model on historical analysis data;
an input unit, configured to input the optical image analysis data and the optical image raw data as feature vectors into the input instruction generation model to obtain a first input result; and
a fitting unit, configured to fit the input information and the first input result to obtain the input instruction information.
8. The apparatus of claim 5, wherein the calculation module comprises:
an acquisition unit, configured to acquire pixel coordinate parameters of the optical image raw data; and
a calculation unit, configured to generate the correction strategy according to the pixel coordinate parameters of the optical image raw data and the input instruction information through the formula
X=δ(W·[h,w]+c)
wherein X is the correction strategy data, δ is the Esse matrix operator, W is the optical image phase number, h and w are the pixel coordinate parameters, and c is the input instruction information.
9. A non-volatile storage medium, comprising a stored program, wherein the program, when executed, controls an apparatus in which the non-volatile storage medium is located to perform the method of any one of claims 1 to 4.
10. An electronic device comprising a processor and a memory; the memory has stored therein computer readable instructions for execution by the processor, wherein the computer readable instructions when executed perform the method of any one of claims 1 to 4.
CN202211578708.6A 2022-12-05 2022-12-05 Optical image correction method and device based on input instruction Pending CN115984126A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211578708.6A CN115984126A (en) 2022-12-05 2022-12-05 Optical image correction method and device based on input instruction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211578708.6A CN115984126A (en) 2022-12-05 2022-12-05 Optical image correction method and device based on input instruction

Publications (1)

Publication Number Publication Date
CN115984126A true CN115984126A (en) 2023-04-18

Family

ID=85965700

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211578708.6A Pending CN115984126A (en) 2022-12-05 2022-12-05 Optical image correction method and device based on input instruction

Country Status (1)

Country Link
CN (1) CN115984126A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116630134A (en) * 2023-05-23 2023-08-22 北京拙河科技有限公司 Multithreading processing method and device for image data of light field camera
CN116883255A (en) * 2023-05-22 2023-10-13 北京拙河科技有限公司 Boundary correction method and device for high-precision light field image

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106651898A (en) * 2016-11-08 2017-05-10 河海大学 Rapid Grabcut algorithm based on masking interpolations
WO2017084428A1 (en) * 2015-11-17 2017-05-26 努比亚技术有限公司 Information processing method, electronic device and computer storage medium
CN108053375A (en) * 2017-12-06 2018-05-18 智车优行科技(北京)有限公司 Image data correction method, device and its automobile
CN111061996A (en) * 2019-12-09 2020-04-24 昆明理工大学 Recommendation algorithm combining Word2vec Word vector and LSH locality sensitive hashing
CN113179375A (en) * 2021-06-09 2021-07-27 北京澎思科技有限公司 Exposure processing method, exposure processing apparatus, electronic device, storage medium, and program product
CN114528900A (en) * 2020-10-30 2022-05-24 罗伯特·博世有限公司 Improving deep neural networks via prototype factorization
CN114764753A (en) * 2021-01-15 2022-07-19 晶晨半导体(上海)股份有限公司 Image processing method, image processing system, electronic device, and storage medium
CN114999092A (en) * 2022-06-10 2022-09-02 北京拙河科技有限公司 Disaster early warning method and device based on multiple forest fire model
CN115100209A (en) * 2022-08-28 2022-09-23 电子科技大学 Camera-based image quality correction method and system
CN115330626A (en) * 2022-08-18 2022-11-11 北京拙河科技有限公司 Picture transformation method and device based on mesh grid network decomposition
CN115375582A (en) * 2022-09-05 2022-11-22 北京拙河科技有限公司 Moire digestion method and device based on low-order Taylor decomposition

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017084428A1 (en) * 2015-11-17 2017-05-26 努比亚技术有限公司 Information processing method, electronic device and computer storage medium
CN106651898A (en) * 2016-11-08 2017-05-10 河海大学 Rapid Grabcut algorithm based on masking interpolations
CN108053375A (en) * 2017-12-06 2018-05-18 智车优行科技(北京)有限公司 Image data correction method, device and its automobile
CN111061996A (en) * 2019-12-09 2020-04-24 昆明理工大学 Recommendation algorithm combining Word2vec Word vector and LSH locality sensitive hashing
CN114528900A (en) * 2020-10-30 2022-05-24 罗伯特·博世有限公司 Improving deep neural networks via prototype factorization
CN114764753A (en) * 2021-01-15 2022-07-19 晶晨半导体(上海)股份有限公司 Image processing method, image processing system, electronic device, and storage medium
CN113179375A (en) * 2021-06-09 2021-07-27 北京澎思科技有限公司 Exposure processing method, exposure processing apparatus, electronic device, storage medium, and program product
CN114999092A (en) * 2022-06-10 2022-09-02 北京拙河科技有限公司 Disaster early warning method and device based on multiple forest fire model
CN115330626A (en) * 2022-08-18 2022-11-11 北京拙河科技有限公司 Picture transformation method and device based on mesh grid network decomposition
CN115100209A (en) * 2022-08-28 2022-09-23 电子科技大学 Camera-based image quality correction method and system
CN115375582A (en) * 2022-09-05 2022-11-22 北京拙河科技有限公司 Moire digestion method and device based on low-order Taylor decomposition

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
OLIVER BIMBER et al.: "Synthetic Aperture Imaging With Drones", IEEE COMPUTER GRAPHICS AND APPLICATIONS, vol. 39, no. 3, page 8, XP011721018, DOI: 10.1109/MCG.2019.2896024 *
SHU Yahai; JIA Qianxi; ZHANG Chao; ZHOU Yuan: "Target Recognition Method Based on Multi-Scale Analysis and Neural Networks", Radar & ECM, no. 02, pages 33-36 *
CHEN Wei et al.: "Research and Implementation of Denoising Algorithms for Mixed-Noise Images", Electronic Measurement Technology, vol. 44, no. 16, pages 135-140 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116883255A (en) * 2023-05-22 2023-10-13 北京拙河科技有限公司 Boundary correction method and device for high-precision light field image
CN116630134A (en) * 2023-05-23 2023-08-22 北京拙河科技有限公司 Multithreading processing method and device for image data of light field camera

Similar Documents

Publication Publication Date Title
CN115984126A (en) Optical image correction method and device based on input instruction
CN109086742A (en) scene recognition method, scene recognition device and mobile terminal
CN111367407B (en) Intelligent glasses interaction method, intelligent glasses interaction device and intelligent glasses
CN115426525B (en) High-speed dynamic frame linkage image splitting method and device
CN115170818A (en) Dynamic frame image feature extraction method and device
CN115375582A (en) Moire digestion method and device based on low-order Taylor decomposition
CN114866702A (en) Multi-auxiliary linkage camera shooting technology-based border monitoring and collecting method and device
CN115809006B (en) Method and device for controlling manual instructions through picture
CN115187570B (en) Singular traversal retrieval method and device based on DNN deep neural network
CN115205313B (en) Picture optimization method and device based on least square algorithm
CN116389915B (en) Method and device for reducing flicker of light field camera
CN116664413B (en) Image volume fog eliminating method and device based on Abbe convergence operator
CN116088580B (en) Flying object tracking method and device
CN116402935B (en) Image synthesis method and device based on ray tracing algorithm
CN116485912B (en) Multi-module coordination method and device for light field camera
CN116468883B (en) High-precision image data volume fog recognition method and device
CN116228593B (en) Image perfecting method and device based on hierarchical antialiasing
CN116797479B (en) Image vertical distortion conversion method
CN116744102B (en) Ball machine tracking method and device based on feedback adjustment
CN115511735B (en) Snow field gray scale picture optimization method and device
CN115546053B (en) Method and device for eliminating diffuse reflection of graphics on snow in complex terrain
CN116723419B (en) Acquisition speed optimization method and device for billion-level high-precision camera
CN116363006B (en) Image calibration method and device based on normal algorithm
CN117367455A (en) Deep learning algorithm unmanned aerial vehicle route design method and device for photovoltaic power station
CN116579964A (en) Dynamic frame gradual-in gradual-out dynamic fusion method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination