CN118334361A - Image processing method and device based on artificial intelligent chip, medium and program - Google Patents


Info

Publication number
CN118334361A
Authority
CN
China
Prior art keywords
image
target
processed
chip
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410338655.3A
Other languages
Chinese (zh)
Inventor
刘瑞哲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Shushang Times Technology Co ltd
Original Assignee
Shenzhen Shushang Times Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Shushang Times Technology Co ltd
Priority to CN202410338655.3A
Publication of CN118334361A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G06T 1/20 Processor architectures; Processor configuration, e.g. pipelining
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/40 Image enhancement or restoration using histogram techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/90 Dynamic range modification of images or parts thereof
    • G06T 5/94 Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Neurology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The application relates to the technical field of image processing, in particular to an image processing method and device based on an artificial intelligent chip, a medium and a program, wherein the method comprises the following steps: acquiring an image to be processed; extracting features of the image to be processed through a video processing chip to obtain a target feature set; and carrying out image recognition on the target feature set through an artificial intelligent chip to obtain a target recognition result. The embodiment of the application can improve the video processing efficiency.

Description

Image processing method and device based on artificial intelligent chip, medium and program
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus based on an artificial intelligent chip, and a medium and program.
Background
With the wide popularization and application of electronic devices (mobile phones, tablet computers, and the like), electronic devices are developing towards diversification and personalization; they support more and more applications with increasingly powerful functions, and have become indispensable electronic articles in users' daily lives.
Currently, an electronic device may be provided with a video processing chip (such as a graphics processing unit, GPU) and an artificial intelligent chip (Artificial Intelligence chip, AI chip). In a typical video processing flow, the video processing chip first processes an image (for example, image segmentation, graying, and image enhancement) and then transmits the processed image to the AI chip for processing such as image recognition; when the AI chip performs image recognition, it needs to first extract features from the image and then perform operations such as matching on those features to obtain a result.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device and a storage medium, which can improve video processing efficiency.
In a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring an image to be processed;
Extracting features of the image to be processed through a video processing chip to obtain a target feature set;
and carrying out image recognition on the target feature set through an artificial intelligent chip to obtain a target recognition result.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including: an acquisition unit, an extraction unit and an identification unit, wherein,
The acquisition unit is used for acquiring the image to be processed;
The extraction unit is used for extracting the characteristics of the image to be processed through a video processing chip to obtain a target characteristic set;
The identification unit is used for carrying out image identification on the target feature set through the artificial intelligent chip to obtain a target identification result.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, the program including instructions for performing the steps in the first aspect of the embodiment of the present application, and the processor including a video processing chip or an artificial intelligence chip.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform part or all of the steps described in the first aspect of the embodiments of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, wherein the computer program product comprises a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
The embodiment of the application has the following beneficial effects:
It can be seen that, according to the image processing method, the device and the storage medium described in the embodiments of the present application, an image to be processed is obtained, feature extraction is performed on the image to be processed through the video processing chip, a target feature set is obtained, and image recognition is performed on the target feature set through the artificial intelligent chip, so that a target recognition result is obtained.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present application;
FIG. 2 is a flowchart of another image processing method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 4 is a block diagram showing functional units of an image processing apparatus according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the present application, a technical solution in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms first, second and the like in the description and in the claims and in the above-described figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the list of steps or elements but may include, in one possible example, other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In the embodiments of the present application, the electronic device may include various handheld devices (mobile phones, tablet computers, etc.) with wireless communication functions, desktop computers, vehicle-mounted devices, wearable devices (smart watches, smart bracelets, wireless headphones, augmented reality/virtual reality devices, smart glasses), computing devices or other processing devices connected to a wireless modem, and various forms of user equipment (UE), mobile stations (MS), terminal devices, and so on. For convenience of description, the above-mentioned devices are collectively referred to as electronic devices; the electronic device may also be a server.
Embodiments of the present application are described in detail below.
Referring to fig. 1, fig. 1 is a flowchart of an image processing method according to an embodiment of the present application, as shown in the drawings, the image processing method includes:
101. And acquiring an image to be processed.
In the embodiment of the application, the image to be processed may be one or more images, and it may be a gray image or a color image; for example, the image to be processed may be at least one frame of image in a target video. The electronic device may include a video processing chip, which may be a central processing unit (CPU) or a GPU, and an artificial intelligent chip (AI chip), which may be a neural-network processing unit (NPU).
Optionally, the step 101 of acquiring the image to be processed may include the following steps:
11. acquiring target chip parameters of the artificial intelligent chip;
12. determining a first shooting parameter set corresponding to the target chip parameter according to a mapping relation between a preset chip parameter and a shooting parameter;
13. Acquiring a target environment parameter;
14. Determining a second shooting parameter set corresponding to the target environment parameter according to a mapping relation between a preset environment parameter and a shooting parameter;
15. Determining an intersection between the first shooting parameter set and the second shooting parameter set to obtain a target shooting parameter set;
16. and shooting according to the target shooting parameter set to obtain the image to be processed.
In the embodiment of the present application, the chip parameter may be at least one of the following: chip model, chip specification, chip run, chip manufacturer, chip production date, number of chip cores, clock frequency, bandwidth, operating voltage, stability, design accuracy, etc., which are not limited herein. The shooting parameter may be at least one of the following: sensitivity, exposure time, focal length, white balance parameter, zoom parameter, and the like, which are not limited herein. The environment parameter may be at least one of the following: ambient light, weather, ambient temperature, ambient humidity, barometric pressure, etc., which are not limited herein. The mapping relation between the preset chip parameters and the shooting parameters and the mapping relation between the preset environment parameters and the shooting parameters may be stored in the electronic device in advance.
Specifically, the electronic device may acquire a target chip parameter of the artificial intelligent chip and determine a first shooting parameter set corresponding to the target chip parameter according to the mapping relation between the preset chip parameter and the shooting parameter; it may also acquire a target environment parameter and determine a second shooting parameter set corresponding to the target environment parameter according to the mapping relation between the preset environment parameter and the shooting parameter. The intersection between the first shooting parameter set and the second shooting parameter set is then determined to obtain a target shooting parameter set, and finally shooting is performed according to the target shooting parameter set to obtain the image to be processed. In this way, an image suited to both the chip performance and the environment can be obtained, which helps guarantee the real-time performance and recognition accuracy of the artificial intelligent chip processing.
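The following is a minimal Python sketch of steps 11 to 16, assuming hypothetical mapping tables and parameter names (the actual mappings would be preset in the electronic device); it only illustrates how the two preset mappings and the set intersection could be combined.

# Minimal sketch of steps 11-16 with assumed mapping tables and parameter names.
CHIP_PARAM_TO_SHOOTING = {          # preset: chip parameter -> candidate shooting parameters
    "npu_model_a": {("iso", 100), ("iso", 200), ("exposure_ms", 10), ("exposure_ms", 20)},
    "npu_model_b": {("iso", 200), ("iso", 400), ("exposure_ms", 20), ("exposure_ms", 40)},
}
ENV_PARAM_TO_SHOOTING = {           # preset: environment parameter -> candidate shooting parameters
    "bright": {("iso", 100), ("iso", 200), ("exposure_ms", 10), ("exposure_ms", 20)},
    "dim":    {("iso", 200), ("iso", 400), ("exposure_ms", 20), ("exposure_ms", 40)},
}

def select_shooting_parameters(target_chip_param, target_env_param):
    """Steps 12, 14 and 15: look up both candidate sets and take their intersection."""
    first_set = CHIP_PARAM_TO_SHOOTING.get(target_chip_param, set())
    second_set = ENV_PARAM_TO_SHOOTING.get(target_env_param, set())
    return first_set & second_set

# Step 16 would then shoot with the resulting target shooting parameter set.
target_shooting_set = select_shooting_parameters("npu_model_b", "dim")
print(target_shooting_set)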
102. And extracting features of the image to be processed through a video processing chip to obtain a target feature set.
In a specific implementation, the target feature set may include at least one feature, which may be at least one of the following: feature points, feature contours, feature textures, target areas, etc., which are not limited herein. The electronic device may preprocess the image to be processed through the video processing chip and then perform feature extraction on the preprocessed image to obtain the target feature set. The preprocessing may be at least one of the following: image segmentation, graying, image enhancement, etc., which are not limited herein.
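As an illustration of this step, the following Python sketch assumes OpenCV is available and uses graying, CLAHE contrast enhancement and ORB keypoints as stand-ins for the preprocessing and features described above; the concrete operators are not prescribed by the application.

# Sketch of video-chip-side preprocessing and feature extraction, assuming OpenCV.
import cv2

def extract_target_feature_set(image_bgr):
    # Preprocessing: graying followed by local contrast enhancement.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)

    # Feature extraction: keypoints and descriptors form the target feature set.
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(enhanced, None)
    return keypoints, descriptors

# keypoints, descriptors = extract_target_feature_set(cv2.imread("frame.png"))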
In the embodiment of the application, the electronic device can perform feature extraction on the image to be processed through the video processing chip to obtain a target feature set, and the target feature set can then be input into the artificial intelligent chip for image recognition.
Optionally, in step 102, feature extraction is performed on the image to be processed by using a video processing chip to obtain a target feature set, which may include the following steps:
A21, performing image quality evaluation on the image to be processed through the video processing chip to obtain a target image quality evaluation value;
A22, when the target image quality evaluation value is in a first preset range, performing image enhancement processing on the image to be processed to obtain a first image, and performing feature extraction on the first image through the video processing chip to obtain the target feature set;
a23, when the target image quality evaluation value is larger than the upper limit value of the first preset range, extracting features of the image to be processed through the video processing chip to obtain the target feature set.
In a specific implementation, the electronic device may perform image quality evaluation on the image to be processed by using at least one image quality evaluation index through the video processing chip to obtain a target image quality evaluation value, where the image quality evaluation index may be at least one of the following: information entropy, average gradient, average gray scale, sharpness, signal-to-noise ratio, edge-preserving degree, etc., which are not limited herein. The first preset range may be set by the user or set by default, and may be an empirical value.
In the embodiment of the application, when the target image quality evaluation value is in the first preset range, the image quality is moderate. The electronic device then performs image enhancement processing on the image to be processed to obtain the first image, so that more features can be extracted after enhancement and the subsequent image recognition accuracy is improved; the video processing chip then performs feature extraction on the first image to obtain the target feature set, and because the image to be processed has been enhanced, more image features can be extracted. When the target image quality evaluation value is greater than the upper limit value of the first preset range, the image quality is already good enough that sufficient features can be extracted to meet the requirement of image recognition, i.e. no image enhancement processing is needed; the electronic device can then directly perform feature extraction on the image to be processed through the video processing chip to obtain the target feature set. Of course, when the target image quality evaluation value is smaller than the lower limit value of the first preset range, the image quality is too poor; even with enhancement the image would not meet the requirement of subsequent image recognition, so the image to be processed can be re-acquired, or the subsequent feature extraction and image recognition are not executed.
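A minimal sketch of steps A21 to A23 is given below, assuming NumPy and using information entropy as one example quality index; the first preset range values and the enhance/extract helpers are placeholder assumptions.

# Sketch of steps A21-A23: quality-gated enhancement and feature extraction.
import numpy as np

QUALITY_LOW, QUALITY_HIGH = 4.0, 6.5   # assumed first preset range (entropy in bits)

def image_quality_score(gray):
    """Information entropy of the gray-level histogram as one possible quality index."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def quality_gated_features(gray, enhance, extract):
    score = image_quality_score(gray)
    if score > QUALITY_HIGH:                   # good enough: extract directly (A23)
        return extract(gray)
    if QUALITY_LOW <= score <= QUALITY_HIGH:   # moderate: enhance first, then extract (A22)
        return extract(enhance(gray))
    return None                                # too poor: re-acquire or skip recognition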
Optionally, in the step a22, the image enhancement processing is performed on the image to be processed to obtain a first image, which may include the following steps:
A221, performing multi-scale decomposition on the image to be processed to obtain a low-frequency characteristic component and a high-frequency characteristic component;
A222, determining a target energy ratio between the low-frequency characteristic component and the high-frequency characteristic component;
A223, when the target energy ratio is in a second preset range, performing global image enhancement processing on the image to be processed to obtain the first image;
A224, when the target energy ratio is larger than the upper limit value of the second preset range, performing image enhancement processing on the high-frequency characteristic component, and performing a reconstruction operation corresponding to the multi-scale decomposition on the high-frequency characteristic component after the image enhancement processing and the low-frequency characteristic component to obtain the first image;
and A225, when the target energy ratio is smaller than the lower limit value of the second preset range, performing image enhancement processing on the low-frequency characteristic component, and performing a reconstruction operation corresponding to the multi-scale decomposition on the low-frequency characteristic component after the image enhancement processing and the high-frequency characteristic component to obtain the first image.
In a specific implementation, the algorithm corresponding to the multi-scale decomposition in the embodiment of the present application may be at least one of the following: wavelet transform, Laplace transform, contourlet transform, non-subsampled contourlet transform, ridgelet transform, shearlet transform, and the like, which are not limited herein. The second preset range may be set by the user or set by default, and may be an empirical value.
Specifically, the electronic device may perform multi-scale decomposition on the image to be processed to obtain a low-frequency characteristic component and a high-frequency characteristic component, where the low-frequency characteristic component reflects, to a certain extent, the main body of the image and the high-frequency characteristic component reflects, to a certain extent, the detail information of the image. The electronic device can then determine the target energy ratio between the low-frequency characteristic component and the high-frequency characteristic component. When the target energy ratio is in the second preset range, the energy distribution between the low frequency and the high frequency is moderate, and global image enhancement processing can be performed on the image to be processed to obtain the first image. When the target energy ratio is larger than the upper limit value of the second preset range, detail information is lacking; the electronic device can perform image enhancement processing on the high-frequency characteristic component and perform a reconstruction operation corresponding to the multi-scale decomposition on the enhanced high-frequency characteristic component and the low-frequency characteristic component to obtain the first image. When the target energy ratio is smaller than the lower limit value of the second preset range, detail information is abundant and the low frequency needs to be enhanced; the electronic device can perform image enhancement processing on the low-frequency characteristic component and perform a reconstruction operation corresponding to the multi-scale decomposition on the enhanced low-frequency characteristic component and the high-frequency characteristic component to obtain the first image.
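The following sketch illustrates steps A221 to A225 with a single-level wavelet decomposition, assuming PyWavelets (pywt) and NumPy; the second preset range and the band-wise enhancement functions are placeholder assumptions.

# Sketch of steps A221-A225: energy-ratio-driven enhancement via a wavelet decomposition.
import numpy as np
import pywt

RATIO_LOW, RATIO_HIGH = 5.0, 50.0   # assumed second preset range for the energy ratio

def enhance_by_energy_ratio(gray, enhance_global, enhance_band):
    # Multi-scale decomposition: low-frequency approximation + high-frequency details.
    cA, (cH, cV, cD) = pywt.dwt2(gray.astype(np.float32), "haar")
    low_energy = float(np.sum(cA ** 2))
    high_energy = float(np.sum(cH ** 2) + np.sum(cV ** 2) + np.sum(cD ** 2))
    ratio = low_energy / max(high_energy, 1e-12)

    if RATIO_LOW <= ratio <= RATIO_HIGH:   # balanced: global enhancement (A223)
        return enhance_global(gray)
    if ratio > RATIO_HIGH:                 # details weak: enhance the high frequencies (A224)
        cH, cV, cD = enhance_band(cH), enhance_band(cV), enhance_band(cD)
    else:                                  # details dominant: enhance the low frequency (A225)
        cA = enhance_band(cA)
    # Reconstruction corresponding to the multi-scale decomposition.
    return pywt.idwt2((cA, (cH, cV, cD)), "haar")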
Optionally, the step a223 may perform global image enhancement processing on the image to be processed to obtain the first image, and may include the following steps:
a2231, determining a target difference value between a target image quality evaluation value and a reference image quality evaluation value, wherein the reference image quality evaluation value is larger than the upper limit value of the first preset range;
A2232, determining a target image enhancement algorithm identifier corresponding to the target difference value according to a mapping relation between the preset difference value and the image enhancement algorithm identifier;
A2233, dividing the image to be processed into a plurality of areas, and determining the definition of each area in the plurality of areas to obtain a plurality of definitions;
a2234, determining target mean square deviations corresponding to the plurality of definitions;
A2235, determining a target fine tuning parameter corresponding to the target mean square error according to a mapping relation between the preset mean square error and the fine tuning parameter;
a2236, obtaining a target image enhancement algorithm corresponding to the target image enhancement algorithm identifier and a reference algorithm control parameter corresponding to the target image enhancement algorithm;
a2237, performing fine adjustment on the reference algorithm control parameter through the target fine adjustment parameter to obtain a target algorithm control parameter;
And A2238, performing image enhancement processing on the image to be processed according to the target algorithm control parameter and the target image enhancement algorithm to obtain the first image.
In the embodiment of the application, the reference image quality evaluation value may be preset or set by default, and may be greater than the upper limit value of the first preset range; if an image quality evaluation value is greater than or equal to the reference image quality evaluation value, the image quality is good. The mapping relation between the preset difference value and the image enhancement algorithm identifier and the mapping relation between the preset mean square error and the fine-tuning parameter may be stored in the electronic device in advance. Different image enhancement algorithm identifiers may correspond to different image enhancement algorithms, and the image enhancement algorithm may be at least one of the following: histogram equalization, gray stretching, Retinex algorithm, etc., which are not limited herein. Each image enhancement algorithm has corresponding algorithm control parameters, which are used to control the degree of image enhancement or the region enhanced by the algorithm; adjusting the algorithm control parameters prevents under-enhancement or over-enhancement.
Specifically, the electronic device may determine the target difference between the target image quality evaluation value and the reference image quality evaluation value. The reference image quality evaluation value can be regarded as the quality of an ideal-effect image, so comparing the current image quality with it yields the target difference, and different differences indicate that the image needs to be enhanced to different degrees. According to the mapping relation between the preset difference value and the image enhancement algorithm identifier, a suitable algorithm can then be selected, i.e. the target image enhancement algorithm identifier corresponding to the target difference is determined. In this way, the image enhancement processing is matched to the actual quality gap, which helps prevent under-enhancement or over-enhancement.
Furthermore, the electronic device may divide the image to be processed into a plurality of regions, determine the definition (sharpness) of each region to obtain a plurality of definitions, determine the target mean square error corresponding to the plurality of definitions, and determine the target fine-tuning parameter corresponding to the target mean square error according to the mapping relation between the preset mean square error and the fine-tuning parameter. The mean square error reflects the stability of the image and the correlation and consistency between image regions. In some cases, stains on the lens or lens damage can make the image uneven and break the consistency between regions, which may cause the algorithm to misjudge the image quality as good; adjusting the control parameter of the image enhancement algorithm by the mean square error therefore makes the algorithm better suited to the characteristics of the current image.
Further, the electronic device may acquire a target image enhancement algorithm corresponding to the target image enhancement algorithm identifier and a reference algorithm control parameter corresponding to the target image enhancement algorithm, and fine-tune the reference algorithm control parameter through the target fine-tuning parameter to obtain the target algorithm control parameter, which specifically includes:
Target algorithm control parameter = (1 + target fine-tuning parameter) × reference algorithm control parameter
Furthermore, the electronic device can perform image enhancement processing on the image to be processed according to the target algorithm control parameters and the target image enhancement algorithm to obtain the first image, so that the image enhancement effect can be improved, more image features can be extracted later, and the image recognition accuracy can be improved.
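A sketch of steps A2231 to A2238 is shown below with hypothetical mapping tables; the algorithm identifiers, the difference and mean-square-error bins, the reference control parameters and the region sharpness measure are illustrative assumptions, and only the selection and fine-tuning logic, including the formula above, follows the description.

# Sketch of steps A2231-A2238: pick an enhancement algorithm and fine-tune its control parameter.
import numpy as np

DIFF_TO_ALGORITHM = [            # (max difference, image enhancement algorithm id), assumed bins
    (0.1, "gray_stretch"),
    (0.3, "histogram_equalization"),
    (float("inf"), "retinex"),
]
MSE_TO_FINE_TUNING = [           # (max mean square error of region sharpness, fine-tuning parameter)
    (10.0, 0.0),
    (50.0, 0.1),
    (float("inf"), 0.2),
]
REFERENCE_CONTROL_PARAM = {"gray_stretch": 1.0, "histogram_equalization": 1.0, "retinex": 1.5}

def lookup(table, value):
    for upper, result in table:
        if value <= upper:
            return result
    return table[-1][1]

def region_sharpness_mse(gray, grid=4):
    """Steps A2233-A2234: mean square deviation of per-region sharpness (gradient variance)."""
    h, w = gray.shape
    sharpness = []
    for i in range(grid):
        for j in range(grid):
            block = gray[i * h // grid:(i + 1) * h // grid, j * w // grid:(j + 1) * w // grid]
            sharpness.append(np.var(np.gradient(block.astype(np.float32))[0]))
    sharpness = np.asarray(sharpness)
    return float(np.mean((sharpness - sharpness.mean()) ** 2))

def pick_enhancement(gray, target_quality, reference_quality):
    algo_id = lookup(DIFF_TO_ALGORITHM, reference_quality - target_quality)   # A2231-A2232
    fine_tuning = lookup(MSE_TO_FINE_TUNING, region_sharpness_mse(gray))      # A2233-A2235
    # A2237: target control parameter = (1 + fine-tuning parameter) * reference parameter
    control_param = (1 + fine_tuning) * REFERENCE_CONTROL_PARAM[algo_id]
    return algo_id, control_param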
Optionally, in step 102, feature extraction is performed on the image to be processed by using a video processing chip to obtain a target feature set, which may include the following steps:
b21, receiving an image recognition instruction, wherein the image recognition instruction carries image recognition requirement parameters, and the image recognition requirement parameters comprise at least one of the following: image recognition accuracy, image recognition object type;
B22, determining a target feature extraction algorithm according to the image recognition requirement parameters;
And B23, carrying out feature extraction on the image to be processed according to the target feature extraction algorithm to obtain the target feature set.
In a specific implementation, the image recognition requirement parameters include at least one of the following: image recognition accuracy, image recognition object type, etc., which are not limited herein. The image recognition object type may be at least one of the following: a person, an animal, an action, a plant, an expression, etc., which are not limited herein. The target feature set may include a plurality of features, and each feature may be at least one of the following: feature points, feature contours, feature vectors, target areas, and color distribution features. Different image recognition instructions correspond to different image recognition requirement parameters, and different image recognition tasks use different image features; therefore, a suitable feature extraction algorithm can be selected based on the requirement of image recognition, i.e. the image recognition requirement parameters. The feature extraction algorithm is used to extract features from the image, and the features are used to realize image recognition; the feature extraction algorithm may be at least one of the following: the Harris corner detection algorithm, a scale-invariant feature extraction algorithm, the Hough transform, a region segmentation algorithm, and the like, which are not limited herein.
Specifically, the mapping relation between the image recognition requirement parameters and the feature extraction algorithm can be stored in the electronic device in advance. The target feature extraction algorithm corresponding to the image recognition requirement parameters can then be determined based on this mapping relation, and feature extraction is performed on the image to be processed according to the target feature extraction algorithm to obtain the target feature set. In other words, a suitable feature extraction algorithm can be deduced in reverse from the purpose of image recognition, so that the extracted features closely match the requirement of image recognition and the image recognition effect can be improved.
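The following sketch illustrates steps B21 to B23, assuming OpenCV and a hypothetical mapping from the image recognition requirement parameters to feature extraction algorithms; which algorithm maps to which requirement is an assumption for illustration only.

# Sketch of steps B21-B23: requirement-driven choice of feature extraction algorithm.
import cv2

def extract_by_requirement(gray, recognition_object_type, high_accuracy):
    # B22: pick an algorithm from the (assumed) preset mapping of requirement parameters.
    if high_accuracy or recognition_object_type in ("person", "expression"):
        sift = cv2.SIFT_create()                 # scale-invariant features for finer recognition
        keypoints, descriptors = sift.detectAndCompute(gray, None)
    else:
        orb = cv2.ORB_create(nfeatures=500)      # faster corner-like features
        keypoints, descriptors = orb.detectAndCompute(gray, None)
    # B23: the extracted keypoints/descriptors form the target feature set.
    return keypoints, descriptors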
103. And carrying out image recognition on the target feature set through an artificial intelligent chip to obtain a target recognition result.
In a specific implementation, the electronic device may perform image recognition on the target feature set through the artificial intelligent chip, for example, to implement image classification, target recognition, pattern recognition, and so on, so as to obtain a target recognition result.
For example, in the related art, video processing is mainly performed by the video processing chip (such as a GPU) first processing the image (for example, image segmentation, graying, and image enhancement) and then transmitting the processed image to the AI chip (such as an NPU) for processing such as image recognition; when the AI chip performs image recognition, it needs to first extract features from the image and then perform operations such as matching according to the features to obtain a result. Both the video processing chip and the AI chip can be implemented using domestically produced chips.
It can be seen that, according to the image processing method described in the embodiment of the application, the image to be processed is obtained, the image to be processed is subjected to feature extraction through the video processing chip to obtain the target feature set, the image recognition is performed on the target feature set through the artificial intelligent chip to obtain the target recognition result, and the feature extraction of the image is placed in the video processing chip for processing, so that the data volume of the artificial intelligent chip can be reduced, and the video processing efficiency can be improved.
Referring to fig. 2, fig. 2 is a flowchart of an image processing method according to an embodiment of the present application, which is applied to an electronic device, and as shown in the drawings, the image processing method includes:
201. And acquiring an image to be processed.
202. And extracting features of the image to be processed through a video processing chip to obtain a target feature set, wherein the target feature set comprises a target feature point set and a target feature contour set.
The specific descriptions of steps 201 to 202 may refer to the corresponding steps of the image processing method described in fig. 1, and are not repeated herein.
203. And carrying out image recognition on the target feature point set through an artificial intelligent chip to obtain a first recognition result set.
In a specific implementation, the artificial intelligent chip may include an artificial neural network algorithm, which may be at least one of the following: a convolutional neural network model, a spiking neural network model, a fully connected neural network model, a recurrent neural network model, and the like, which are not limited herein. Image recognition of the target feature point set can be realized through the artificial neural network algorithm, i.e. the target feature point set is input into the neural network model to obtain a plurality of recognition results, where each recognition result corresponds to one label and one probability value; the plurality of recognition results and the probability value corresponding to each recognition result are taken as the first recognition result set.
204. And carrying out image recognition on the target feature contour set through an artificial intelligent chip to obtain a second recognition result set.
In a specific implementation, image recognition can be performed on the target feature contour set through the artificial neural network algorithm, i.e. the target feature contour set is input into the neural network model to obtain a plurality of recognition results, where each recognition result corresponds to one label and one probability value; the plurality of recognition results and the probability value corresponding to each recognition result are taken as the second recognition result set.
205. And determining the target recognition result according to the first recognition result set and the second recognition result set.
In a specific implementation, the electronic device may determine a first weight corresponding to the first recognition result set through the target image quality evaluation value. Specifically, a mapping relation between the image quality evaluation value and the weight may be stored in the electronic device in advance, and the first weight corresponding to the target image quality evaluation value is determined based on this mapping relation; the second weight corresponding to the second recognition result set is 1 minus the first weight. A weighted operation is then performed on the probability value of each recognition result in the first recognition result set and the first weight to obtain a plurality of first weighted probability values, and a weighted operation is performed on the probability value of each recognition result in the second recognition result set and the second weight to obtain a plurality of second weighted probability values. The weighted probability values of the same label are summed, the maximum value among the summed weighted probability values is selected, and the label corresponding to the maximum value is taken as the target recognition result.
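A minimal sketch of the fusion in steps 203 to 205 is given below; the recognition result sets are represented as (label, probability) pairs, and the quality-dependent first weight is passed in directly since the quality-to-weight mapping is device-specific.

# Sketch of step 205: weighted fusion of the two recognition result sets.
from collections import defaultdict

def fuse_recognition_results(point_results, contour_results, first_weight):
    """point_results / contour_results: lists of (label, probability) pairs;
    first_weight weights the feature-point results, the contour results get 1 - first_weight."""
    scores = defaultdict(float)
    for label, prob in point_results:
        scores[label] += first_weight * prob
    for label, prob in contour_results:
        scores[label] += (1.0 - first_weight) * prob
    # The label with the largest summed weighted probability is the target recognition result.
    return max(scores.items(), key=lambda item: item[1])[0]

# Example: the feature points favour "cat", the contours favour "dog"; the weights decide.
result = fuse_recognition_results([("cat", 0.7), ("dog", 0.3)],
                                  [("cat", 0.4), ("dog", 0.6)],
                                  first_weight=0.6)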
It can be seen that, in the image processing method described in the embodiment of the present application, an image to be processed is obtained, and feature extraction is performed on the image to be processed by the video processing chip to obtain a target feature set that includes a target feature point set and a target feature contour set; image recognition is performed on the target feature point set through the artificial intelligent chip to obtain a first recognition result set, image recognition is performed on the target feature contour set through the artificial intelligent chip to obtain a second recognition result set, and the target recognition result is determined according to the first recognition result set and the second recognition result set.
In accordance with the above embodiment, referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application, as shown in the drawing, the electronic device includes a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the processor may be a video processing chip or an artificial intelligence chip, and in the embodiment of the present application, the programs include instructions for executing the following steps:
acquiring an image to be processed;
Extracting features of the image to be processed through a video processing chip to obtain a target feature set;
and carrying out image recognition on the target feature set through an artificial intelligent chip to obtain a target recognition result.
It can be seen that, in the electronic device described in the embodiment of the present application, an image to be processed is obtained, feature extraction is performed on the image to be processed through the video processing chip, a target feature set is obtained, and image recognition is performed on the target feature set through the artificial intelligent chip, so that a target recognition result is obtained.
Optionally, in the aspect that the feature extraction is performed on the image to be processed by the video processing chip to obtain the target feature set, the program includes instructions for executing the following steps:
Performing image quality evaluation on the image to be processed through the video processing chip to obtain a target image quality evaluation value;
When the target image quality evaluation value is in a first preset range, performing image enhancement processing on the image to be processed to obtain a first image, and performing feature extraction on the first image through the video processing chip to obtain the target feature set;
And when the target image quality evaluation value is larger than the upper limit value of the first preset range, extracting the characteristics of the image to be processed through the video processing chip to obtain the target characteristic set.
Optionally, in the aspect of performing image enhancement processing on the image to be processed to obtain the first image, the program includes instructions for performing the following steps:
Performing multi-scale decomposition on the image to be processed to obtain a low-frequency characteristic component and a high-frequency characteristic component;
Determining a target energy ratio between the low frequency characteristic component and the high frequency characteristic component;
When the target energy ratio is in a second preset range, performing global image enhancement processing on the image to be processed to obtain the first image;
When the target energy ratio is larger than the upper limit value of the second preset range, performing image enhancement processing on the high-frequency characteristic component, and performing a reconstruction operation corresponding to the multi-scale decomposition on the high-frequency characteristic component after the image enhancement processing and the low-frequency characteristic component to obtain the first image;
And when the target energy ratio is smaller than the lower limit value of the second preset range, performing image enhancement processing on the low-frequency characteristic component, and performing a reconstruction operation corresponding to the multi-scale decomposition on the low-frequency characteristic component after the image enhancement processing and the high-frequency characteristic component to obtain the first image.
Optionally, in the aspect of performing global image enhancement processing on the image to be processed to obtain the first image, the program includes instructions for performing the following steps:
Determining a target difference between a target image quality evaluation value and a reference image quality evaluation value, the reference image quality evaluation value being greater than an upper limit value of the first preset range;
determining a target image enhancement algorithm identifier corresponding to the target difference value according to a mapping relation between the preset difference value and the image enhancement algorithm identifier;
Dividing the image to be processed into a plurality of areas, and determining the definition of each area in the plurality of areas to obtain a plurality of definitions;
determining target mean square deviations corresponding to the plurality of definitions;
Determining a target fine tuning parameter corresponding to the target mean square error according to a mapping relation between the preset mean square error and the fine tuning parameter;
Acquiring a target image enhancement algorithm corresponding to the target image enhancement algorithm identifier and a reference algorithm control parameter corresponding to the target image enhancement algorithm;
fine tuning the reference algorithm control parameter through the target fine tuning parameter to obtain a target algorithm control parameter;
And carrying out image enhancement processing on the image to be processed according to the target algorithm control parameter and the target image enhancement algorithm to obtain the first image.
Optionally, in the acquiring an image to be processed, the program includes instructions for:
Acquiring target chip parameters of the artificial intelligent chip;
Determining a first shooting parameter set corresponding to the target chip parameter according to a mapping relation between a preset chip parameter and a shooting parameter;
Acquiring a target environment parameter;
determining a second shooting parameter set corresponding to the target environment parameter according to a mapping relation between a preset environment parameter and a shooting parameter;
Determining an intersection between the first shooting parameter set and the second shooting parameter set to obtain a target shooting parameter set;
and shooting according to the target shooting parameter set to obtain the image to be processed.
Fig. 4 is a block diagram showing functional units of an image processing apparatus 400 according to an embodiment of the present application. The image processing apparatus 400 is applied to an electronic device, and the apparatus 400 includes: an acquisition unit 401, an extraction unit 402, and an identification unit 403, wherein,
The acquiring unit 401 is configured to acquire an image to be processed;
the extracting unit 402 is configured to perform feature extraction on the image to be processed through a video processing chip, so as to obtain a target feature set;
The recognition unit 403 is configured to perform image recognition on the target feature set through an artificial intelligent chip, so as to obtain a target recognition result.
It can be seen that, in the image processing device described in the embodiment of the present application, an image to be processed is obtained, feature extraction is performed on the image to be processed through a video processing chip, a target feature set is obtained, and image recognition is performed on the target feature set through an artificial intelligent chip, so that a target recognition result is obtained.
Optionally, in the aspect that the feature extraction is performed on the image to be processed by the video processing chip to obtain a target feature set, the extracting unit 402 is specifically configured to:
Performing image quality evaluation on the image to be processed through the video processing chip to obtain a target image quality evaluation value;
When the target image quality evaluation value is in a first preset range, performing image enhancement processing on the image to be processed to obtain a first image, and performing feature extraction on the first image through the video processing chip to obtain the target feature set;
And when the target image quality evaluation value is larger than the upper limit value of the first preset range, extracting the characteristics of the image to be processed through the video processing chip to obtain the target characteristic set.
Optionally, in the aspect of performing image enhancement processing on the image to be processed to obtain a first image, the extracting unit 402 is specifically configured to:
Performing multi-scale decomposition on the image to be processed to obtain a low-frequency characteristic component and a high-frequency characteristic component;
Determining a target energy ratio between the low frequency characteristic component and the high frequency characteristic component;
When the target energy ratio is in a second preset range, performing global image enhancement processing on the image to be processed to obtain the first image;
When the target energy ratio is larger than the upper limit value of the second preset range, performing image enhancement processing on the high-frequency characteristic component, and performing a reconstruction operation corresponding to the multi-scale decomposition on the high-frequency characteristic component after the image enhancement processing and the low-frequency characteristic component to obtain the first image;
And when the target energy ratio is smaller than the lower limit value of the second preset range, performing image enhancement processing on the low-frequency characteristic component, and performing a reconstruction operation corresponding to the multi-scale decomposition on the low-frequency characteristic component after the image enhancement processing and the high-frequency characteristic component to obtain the first image.
Optionally, in the aspect that the global image enhancement processing is performed on the image to be processed to obtain the first image, the extracting unit 402 is specifically configured to:
Determining a target difference between a target image quality evaluation value and a reference image quality evaluation value, the reference image quality evaluation value being greater than an upper limit value of the first preset range;
determining a target image enhancement algorithm identifier corresponding to the target difference value according to a mapping relation between the preset difference value and the image enhancement algorithm identifier;
Dividing the image to be processed into a plurality of areas, and determining the definition of each area in the plurality of areas to obtain a plurality of definitions;
determining target mean square deviations corresponding to the plurality of definitions;
Determining a target fine tuning parameter corresponding to the target mean square error according to a mapping relation between the preset mean square error and the fine tuning parameter;
Acquiring a target image enhancement algorithm corresponding to the target image enhancement algorithm identifier and a reference algorithm control parameter corresponding to the target image enhancement algorithm;
fine tuning the reference algorithm control parameter through the target fine tuning parameter to obtain a target algorithm control parameter;
And carrying out image enhancement processing on the image to be processed according to the target algorithm control parameter and the target image enhancement algorithm to obtain the first image.
Optionally, in the aspect of acquiring the image to be processed, the acquiring unit 401 is specifically configured to:
Acquiring target chip parameters of the artificial intelligent chip;
Determining a first shooting parameter set corresponding to the target chip parameter according to a mapping relation between a preset chip parameter and a shooting parameter;
Acquiring a target environment parameter;
determining a second shooting parameter set corresponding to the target environment parameter according to a mapping relation between a preset environment parameter and a shooting parameter;
Determining an intersection between the first shooting parameter set and the second shooting parameter set to obtain a target shooting parameter set;
and shooting according to the target shooting parameter set to obtain the image to be processed.
It may be understood that the functions of each program module of the image processing apparatus of the present embodiment may be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process may refer to the relevant description of the foregoing method embodiment, which is not repeated herein.
The embodiment of the application also provides a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program makes a computer execute part or all of the steps of any one of the above method embodiments, and the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform part or all of the steps of any one of the methods described in the method embodiments above. The computer program product may be a software installation package, said computer comprising an electronic device.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative; the above-described division of units is merely a division of logical functions, and there may be other manners of division in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned memory includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or various other media capable of storing program code.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in the various methods of the above embodiments may be implemented by a program that instructs associated hardware, and the program may be stored in a computer-readable memory, which may include: a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing has outlined rather broadly the more detailed description of embodiments of the application, wherein the principles and embodiments of the application are explained in detail using specific examples, the above examples being provided solely to facilitate the understanding of the method and core concepts of the application; meanwhile, as those skilled in the art will have variations in the specific embodiments and application scope in accordance with the ideas of the present application, the present description should not be construed as limiting the present application in view of the above.

Claims (10)

1. An image processing method, the method comprising:
acquiring an image to be processed;
extracting features of the image to be processed through a video processing chip to obtain a target feature set;
and carrying out image recognition on the target feature set through an artificial intelligent chip to obtain a target recognition result.
2. The method according to claim 1, wherein the feature extraction of the image to be processed by the video processing chip to obtain a target feature set comprises:
Performing image quality evaluation on the image to be processed through the video processing chip to obtain a target image quality evaluation value;
when the target image quality evaluation value is in a first preset range, performing image enhancement processing on the image to be processed to obtain a first image, and performing feature extraction on the first image through the video processing chip to obtain the target feature set;
And when the target image quality evaluation value is larger than the upper limit value of the first preset range, extracting features of the image to be processed through the video processing chip to obtain the target feature set.
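A minimal sketch of the quality-gated branch in claim 2 is given below. The numeric bounds of the first preset range and the evaluate_quality and enhance callables are assumptions for illustration; the claim itself does not fix them, and it leaves the below-range case unspecified.

```python
# Hypothetical bounds for the "first preset range"; the application does not give values.
FIRST_PRESET_RANGE = (0.3, 0.7)


def extract_with_quality_gate(image, video_chip, evaluate_quality, enhance):
    """Claim-2 flow: enhance before extraction only when quality falls inside the range."""
    score = evaluate_quality(image)          # target image quality evaluation value
    low, high = FIRST_PRESET_RANGE
    if low <= score <= high:
        first_image = enhance(image)         # image enhancement processing
        return video_chip.extract_features(first_image)
    if score > high:
        return video_chip.extract_features(image)  # quality already above the upper limit
    # The claim does not recite a branch for scores below the range.
    raise ValueError("quality evaluation value below the first preset range")
```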
3. The method according to claim 2, wherein performing image enhancement processing on the image to be processed to obtain a first image comprises:
Performing multi-scale decomposition on the image to be processed to obtain a low-frequency characteristic component and a high-frequency characteristic component;
Determining a target energy ratio between the low frequency characteristic component and the high frequency characteristic component;
When the target energy ratio is in a second preset range, performing global image enhancement processing on the image to be processed to obtain the first image;
when the target energy ratio is larger than the upper limit value of the second preset range, performing image enhancement processing on the high-frequency characteristic component, and performing a reconstruction operation corresponding to the multi-scale decomposition on the enhanced high-frequency characteristic component and the low-frequency characteristic component to obtain the first image;
And when the target energy ratio is smaller than the lower limit value of the second preset range, performing image enhancement processing on the low-frequency characteristic component, and performing a reconstruction operation corresponding to the multi-scale decomposition on the enhanced low-frequency characteristic component and the high-frequency characteristic component to obtain the first image.
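As a rough sketch of claim 3, a single Gaussian low-pass/high-pass split can stand in for the (unspecified) multi-scale decomposition, with the sum of squares used as the energy measure; the bounds of the second preset range are invented for illustration, and a grayscale image is assumed.

```python
import numpy as np
from scipy.ndimage import gaussian_filter  # simple stand-in for the multi-scale transform

SECOND_PRESET_RANGE = (0.8, 1.2)  # hypothetical bounds; not specified by the application


def enhance_by_frequency_split(image: np.ndarray, enhance) -> np.ndarray:
    """Claim-3 flow: split into low/high frequency, compare the energy ratio, enhance, rebuild."""
    img = image.astype(np.float64)           # assumes a single-channel (grayscale) image
    low = gaussian_filter(img, sigma=3.0)    # low-frequency characteristic component
    high = img - low                         # high-frequency characteristic component

    # Target energy ratio between the two components (sum of squares as energy).
    ratio = float(np.sum(low ** 2) / max(np.sum(high ** 2), 1e-12))

    lo_bound, hi_bound = SECOND_PRESET_RANGE
    if lo_bound <= ratio <= hi_bound:
        return enhance(img)                  # global image enhancement on the whole image
    if ratio > hi_bound:
        high = enhance(high)                 # enhance only the high-frequency component
    else:
        low = enhance(low)                   # enhance only the low-frequency component
    return low + high                        # reconstruction: inverse of the additive split
```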
4. The method according to claim 3, wherein the performing global image enhancement processing on the image to be processed to obtain the first image comprises:
Determining a target difference value between the target image quality evaluation value and a reference image quality evaluation value, the reference image quality evaluation value being greater than the upper limit value of the first preset range;
determining a target image enhancement algorithm identifier corresponding to the target difference value according to a mapping relation between a preset difference value and an image enhancement algorithm identifier;
Dividing the image to be processed into a plurality of areas, and determining the definition of each area in the plurality of areas to obtain a plurality of definitions;
determining a target mean square deviation corresponding to the plurality of definitions;
Determining a target fine tuning parameter corresponding to the target mean square deviation according to a mapping relation between a preset mean square deviation and a fine tuning parameter;
Acquiring a target image enhancement algorithm corresponding to the target image enhancement algorithm identifier and a reference algorithm control parameter corresponding to the target image enhancement algorithm;
Fine tuning the reference algorithm control parameter through the target fine tuning parameter to obtain a target algorithm control parameter;
And carrying out image enhancement processing on the image to be processed according to the target algorithm control parameter and the target image enhancement algorithm to obtain the first image.
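The parameter-selection steps of claim 4 can be sketched as below. The two mapping relations, the region grid, the definition (sharpness) measure, and the bucketing of the difference value and mean square deviation are all illustrative assumptions; the claim only requires that such preset mappings exist, not what they contain.

```python
import numpy as np

# Hypothetical preset mappings; the application does not disclose their contents.
DIFF_TO_ALGORITHM_ID = {0: "gamma", 1: "clahe", 2: "unsharp"}   # difference value -> algorithm identifier
MSD_TO_FINE_TUNING = {0: 0.9, 1: 1.0, 2: 1.1}                   # mean square deviation -> fine tuning parameter
REFERENCE_CONTROL_PARAMETER = {"gamma": 0.8, "clahe": 2.0, "unsharp": 1.5}


def region_definition(region: np.ndarray) -> float:
    """Variance of the gradient magnitude as a simple proxy for a region's definition (sharpness)."""
    gy, gx = np.gradient(region.astype(np.float64))  # assumes a single-channel region
    return float(np.var(np.hypot(gx, gy)))


def choose_enhancement(image: np.ndarray, quality: float, reference_quality: float):
    """Claim-4 flow: map the quality gap to an algorithm identifier, then fine-tune its
    reference control parameter from the spread of per-region definition values."""
    target_difference = abs(reference_quality - quality)
    diff_bucket = min(int(target_difference * 10), 2)            # illustrative quantisation
    algorithm_id = DIFF_TO_ALGORITHM_ID[diff_bucket]

    h, w = image.shape[:2]
    regions = [image[i:i + h // 2, j:j + w // 2]                 # 2x2 grid of regions
               for i in (0, h // 2) for j in (0, w // 2)]
    definitions = [region_definition(r) for r in regions]
    msd_bucket = min(int(np.std(definitions)), 2)                # illustrative quantisation
    fine_tuning = MSD_TO_FINE_TUNING[msd_bucket]

    control_parameter = REFERENCE_CONTROL_PARAMETER[algorithm_id] * fine_tuning
    return algorithm_id, control_parameter
```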
5. The method according to any one of claims 1 to 4, wherein the acquiring of the image to be processed comprises:
Acquiring a target chip parameter of the artificial intelligent chip;
Determining a first shooting parameter set corresponding to the target chip parameter according to a mapping relation between a preset chip parameter and a shooting parameter;
Acquiring a target environment parameter;
determining a second shooting parameter set corresponding to the target environment parameter according to a mapping relation between a preset environment parameter and a shooting parameter;
determining an intersection between the first shooting parameter set and the second shooting parameter set to obtain a target shooting parameter set;
And shooting according to the target shooting parameter set to obtain the image to be processed.
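A small sketch of the acquisition step in claim 5: each preset mapping yields a candidate set of shooting parameters, and the intersection of the two sets is used for shooting. The parameter names and set contents below are invented for illustration.

```python
# Hypothetical preset mappings from chip / environment parameters to candidate shooting parameters.
CHIP_TO_SHOOTING_PARAMS = {
    "npu_4tops": {"1080p@30", "720p@60", "720p@30"},
    "npu_8tops": {"4k@30", "1080p@60", "1080p@30"},
}
ENV_TO_SHOOTING_PARAMS = {
    "low_light": {"1080p@30", "720p@30"},
    "daylight": {"4k@30", "1080p@60", "1080p@30", "720p@60"},
}


def select_shooting_parameters(target_chip_parameter: str, target_environment_parameter: str) -> set:
    """Claim-5 flow: intersect the chip-driven and environment-driven candidate sets."""
    first_set = CHIP_TO_SHOOTING_PARAMS[target_chip_parameter]          # first shooting parameter set
    second_set = ENV_TO_SHOOTING_PARAMS[target_environment_parameter]   # second shooting parameter set
    return first_set & second_set                                       # target shooting parameter set


# Example: select_shooting_parameters("npu_8tops", "low_light") returns {"1080p@30"}
```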
6. An image processing apparatus, characterized in that the apparatus comprises: an acquisition unit, an extraction unit and a recognition unit, wherein,
The acquisition unit is used for acquiring the image to be processed;
the extraction unit is used for extracting the characteristics of the image to be processed through a video processing chip so as to obtain a target characteristic set;
the recognition unit is used for carrying out image recognition on the target feature set through the artificial intelligent chip so as to obtain a target recognition result.
7. The apparatus according to claim 6, wherein, with respect to the feature extraction of the image to be processed by the video processing chip to obtain a target feature set, the extraction unit is specifically configured to:
Performing image quality evaluation on the image to be processed through the video processing chip to obtain a target image quality evaluation value;
When the target image quality evaluation value is in a first preset range, performing image enhancement processing on the image to be processed to obtain a first image, and performing feature extraction on the first image through the video processing chip to obtain the target feature set;
And when the target image quality evaluation value is larger than the upper limit value of the first preset range, extracting the characteristics of the image to be processed through the video processing chip to obtain the target characteristic set.
8. An electronic device, comprising a processor and a memory, the memory being used for storing one or more programs which are configured to be executed by the processor, the one or more programs comprising instructions for performing the steps in the method of any one of claims 1-5, the processor comprising a video processing chip or an artificial intelligence chip.
9. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method of any of claims 1-5.
10. A computer program, characterized in that the computer program comprises program instructions which, when executed by a processor, cause the processor to perform the method of any one of claims 1-5.
CN202410338655.3A 2021-07-31 2021-07-31 Image processing method and device based on artificial intelligent chip, medium and program Pending CN118334361A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410338655.3A CN118334361A (en) 2021-07-31 2021-07-31 Image processing method and device based on artificial intelligent chip, medium and program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110876555.2A CN113807166B (en) 2021-07-31 2021-07-31 Image processing method, device and storage medium
CN202410338655.3A CN118334361A (en) 2021-07-31 2021-07-31 Image processing method and device based on artificial intelligent chip, medium and program

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202110876555.2A Division CN113807166B (en) 2021-07-31 2021-07-31 Image processing method, device and storage medium

Publications (1)

Publication Number Publication Date
CN118334361A true CN118334361A (en) 2024-07-12

Family

ID=78942733

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110876555.2A Active CN113807166B (en) 2021-07-31 2021-07-31 Image processing method, device and storage medium
CN202410338655.3A Pending CN118334361A (en) 2021-07-31 2021-07-31 Image processing method and device based on artificial intelligent chip, medium and program

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202110876555.2A Active CN113807166B (en) 2021-07-31 2021-07-31 Image processing method, device and storage medium

Country Status (1)

Country Link
CN (2) CN113807166B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114418824A (en) * 2022-01-27 2022-04-29 支付宝(杭州)信息技术有限公司 Image processing method, device and storage medium
CN114842579B (en) * 2022-04-26 2024-02-20 深圳市凯迪仕智能科技股份有限公司 Intelligent lock, image processing method and related products

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107368791A (en) * 2017-06-29 2017-11-21 广东欧珀移动通信有限公司 Living iris detection method and Related product
CN109801209B (en) * 2019-01-29 2023-12-05 爱芯元智半导体(宁波)有限公司 Parameter prediction method, artificial intelligent chip, equipment and system
CN111160175A (en) * 2019-12-19 2020-05-15 中科寒武纪科技股份有限公司 Intelligent pedestrian violation behavior management method and related product
CN111681164B (en) * 2020-05-29 2023-06-16 广州市盛光微电子有限公司 Device and method for cruising panoramic image in partial end-to-end connection mode
CN111783375A (en) * 2020-06-30 2020-10-16 Oppo广东移动通信有限公司 Chip system and related device

Also Published As

Publication number Publication date
CN113807166B (en) 2024-03-08
CN113807166A (en) 2021-12-17

Similar Documents

Publication Publication Date Title
CN113807166B (en) Image processing method, device and storage medium
CN106650568B (en) Face recognition method and device
CN108734126B (en) Beautifying method, beautifying device and terminal equipment
CN105760851A (en) Fingerprint identification method and terminal
CN110610191A (en) Elevator floor identification method and device and terminal equipment
CN111523479A (en) Biological feature recognition method and device for animal, computer equipment and storage medium
CN111444373B (en) Image retrieval method, device, medium and system thereof
CN111652878B (en) Image detection method, image detection device, computer equipment and storage medium
CN116959113A (en) Gait recognition method and device
CN115115552B (en) Image correction model training method, image correction device and computer equipment
CN109657546B (en) Video behavior recognition method based on neural network and terminal equipment
CN115546554A (en) Sensitive image identification method, device, equipment and computer readable storage medium
CN112950641B (en) Image processing method and device, computer readable storage medium and electronic equipment
CN112084874B (en) Object detection method and device and terminal equipment
CN114549857A (en) Image information identification method and device, computer equipment and storage medium
CN115841437A (en) Image enhancement method, device and equipment
CN113760415A (en) Dial plate generation method and device, electronic equipment and computer readable storage medium
CN111401317A (en) Video classification method, device, equipment and storage medium
CN117152567B (en) Training method, classifying method and device of feature extraction network and electronic equipment
CN114550236B (en) Training method, device, equipment and storage medium for image recognition and model thereof
CN111144427B (en) Image feature extraction method, device, equipment and readable storage medium
CN112749705B (en) Training model updating method and related equipment
CN111666878B (en) Object detection method and device
CN113743308B (en) Face recognition method, device, storage medium and system based on feature quality
CN114612987B (en) Expression recognition method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination