CN116320720B - Image processing method, device, equipment and storage medium - Google Patents


Info

Publication number: CN116320720B
Authority: CN (China)
Prior art keywords: image, information, processed, displayed, scene
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to its accuracy)
Application number: CN202310511170.5A
Other languages: Chinese (zh)
Other versions: CN116320720A
Inventor: 张建民
Current and original assignee: Nanjing Semidrive Technology Co Ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by Nanjing Semidrive Technology Co Ltd
Priority to CN202310511170.5A
Publication of CN116320720A, followed by grant and publication of CN116320720B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The present disclosure provides an image processing method, apparatus, device, and storage medium. The method determines the scene category information of an image to be processed by inputting the image into a scene recognition model; determines, according to the scene category information, the corresponding information to be displayed, where the information to be displayed includes state information of the image to be processed and debugging parameter information associated with the scene category information; and superimposes the information to be displayed at a preset position of the image to be processed for display, making image debugging work simpler, more efficient, and more convenient.

Description

Image processing method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image processing method, an apparatus, a device, and a storage medium.
Background
As automobiles become increasingly intelligent, the automotive industry's requirements for vehicle-mounted cameras are steadily rising.
A vehicle-mounted camera differs markedly from a smartphone camera. Because vehicle safety requirements are high, a vehicle-mounted camera places more weight on video capture and attends to every frame in the video. Moreover, the video it captures consists mainly of the vehicle's surroundings while driving, including complex environments such as tunnels, oncoming headlights, and night roads, which challenges its shooting performance.
Before a mature vehicle-mounted camera reaches the market, a debugging engineer may tune it thousands of times; tuning the shooting effect for complex environmental information encountered while driving is especially important. In the prior art, the image is typically adjusted during debugging either by capturing image information in a log file or by displaying it through an image analysis tool. This leads to problems such as excessive debugging information, unclear classification, difficulty locating the image being debugged, and difficulty viewing image information.
Disclosure of Invention
The present disclosure provides an image processing method, apparatus, device, and storage medium, to at least solve the above technical problems in the prior art.
According to a first aspect of the present disclosure, there is provided an image processing method, characterized in that the method includes:
inputting an image to be processed into a scene recognition model, and determining scene category information of the image to be processed;
determining information to be displayed corresponding to the scene category information according to the scene category information, wherein the information to be displayed comprises state information of the image to be processed and debugging parameter information associated with the scene category information;
and superimposing the information to be displayed at a preset position of the image to be processed for display.
In an embodiment, before the inputting the image to be processed into the scene recognition model and determining the scene category information of the image to be processed, the method further includes:
collecting a video to be processed shot by a camera, wherein the video to be processed consists of a plurality of frames of continuous images;
and acquiring images in the multi-frame continuous images at intervals according to a preset interval frame number to serve as images to be processed.
In an embodiment, before the capturing the video to be processed captured by the camera, the method further includes:
according to the image characteristics of each scene category information, defining the debugging parameter information which needs to be focused by each scene category information.
In an embodiment, after determining the information to be displayed corresponding to the scene category information according to the scene category information, the method further includes:
classifying and caching the debugging parameter information associated with the state information of the image to be processed and the scene category information;
judging whether the current display mode of the image to be processed is a manual selection mode or an automatic display mode;
Correspondingly, the information to be displayed is displayed at the preset position of the image to be processed in a superimposed mode, and the method comprises the following steps:
if the current display mode of the image to be processed is a manual selection mode, displaying the state information and part of information in the debugging parameter information according to the received user operation instruction, and superposing the state information and part of information in the debugging parameter information at a preset position of the image to be processed;
if the current display mode of the image to be processed is the automatic display mode, displaying the state information and the debugging parameter information by default and superposing the state information and the debugging parameter information at a preset position of the image to be processed.
In an embodiment, after the information to be displayed is displayed at the preset position of the image to be processed, the method further includes:
recording information to be displayed of the image to be processed by at least one of the following modes:
embedding the information to be displayed in the image to be processed for recording; or, storing the information to be displayed of the image to be processed into a record file; or, storing the state information of the image to be processed and all the image parameter information into a record file.
According to a second aspect of the present disclosure, there is provided an image processing apparatus including:
The scene acquisition module is used for inputting the image to be processed into the scene recognition model and determining scene category information of the image to be processed;
the information to be displayed determining module is used for determining information to be displayed corresponding to the scene category information according to the scene category information, wherein the information to be displayed comprises state information of the image to be processed and debugging parameter information associated with the scene category information;
and the information display module to be displayed is used for superposing the information to be displayed on the preset position of the image to be processed for display.
In an embodiment, the apparatus further comprises:
the image processing module is used for processing the image to be processed, wherein the image processing module is used for processing the image to be processed, and acquiring the image to be processed shot by the camera before the image to be processed is input into the scene recognition model and the scene category information of the image to be processed is determined, and the image to be processed consists of a plurality of frames of continuous images; and acquiring images in the multi-frame continuous images at intervals according to a preset interval frame number to serve as images to be processed.
In an embodiment, the apparatus further comprises:
the debugging parameter determining module is used for defining debugging parameter information which needs to be focused on by each scene type information according to the image characteristics of each scene type information before the video to be processed which is shot by the acquisition camera.
In an embodiment, the apparatus further comprises:
the mode selection module is specifically configured to:
classifying and caching the debugging parameter information associated with the state information of the image to be processed and the scene category information;
judging whether the current display mode of the image to be processed is a manual selection mode or an automatic display mode;
correspondingly, the information display module to be displayed is specifically configured to:
if the current display mode of the image to be processed is a manual selection mode, displaying the state information and part of information in the debugging parameter information according to the received user operation instruction, and superposing the state information and part of information in the debugging parameter information at a preset position of the image to be processed;
if the current display mode of the image to be processed is the automatic display mode, displaying the state information and the debugging parameter information by default and superposing the state information and the debugging parameter information at a preset position of the image to be processed.
In an embodiment, the apparatus further comprises:
the information storage module is specifically used for:
recording information to be displayed of the image to be processed by at least one of the following modes:
embedding the information to be displayed in the image to be processed for recording; or, storing the information to be displayed of the image to be processed into a record file; or, storing the state information of the image to be processed and all the image parameter information into a record file.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the methods described in the present disclosure.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of the present disclosure.
With the image processing method, apparatus, device, and storage medium of the present disclosure, the scene category information of an image to be processed is determined by inputting the image into a scene recognition model; the information to be displayed corresponding to the scene category information is determined according to the scene category information, where the information to be displayed includes state information of the image to be processed and debugging parameter information associated with the scene category information; and the information to be displayed is superimposed at a preset position of the image to be processed for display, making image debugging work simpler, more efficient, and more convenient.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which:
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Fig. 1A is a schematic implementation flow diagram of an image processing method according to a first embodiment of the disclosure;
fig. 1B is a schematic diagram illustrating an association relationship between scene category information and information to be displayed according to a first embodiment of the present disclosure;
fig. 2A is a schematic flowchart illustrating an implementation of an image processing method according to a first embodiment of the disclosure;
fig. 2B is a simplified flowchart of an image processing method according to a second embodiment of the disclosure;
fig. 3 is a schematic diagram showing the structure of an image processing apparatus according to a third embodiment of the present disclosure;
Fig. 4 shows a schematic diagram of a composition structure of an electronic device according to an embodiment of the disclosure.
Detailed Description
In order to make the objects, features and advantages of the present disclosure more comprehensible, the technical solutions in the embodiments of the present disclosure will be clearly described in conjunction with the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. Based on the embodiments in this disclosure, all other embodiments that a person skilled in the art would obtain without making any inventive effort are within the scope of protection of this disclosure.
In the prior art, a debugging engineer generally has two ways of debugging an image parameter, both starting from obtaining information related to the image. The first is to capture a log file in which all image parameters of every frame in the video are recorded. The debugging engineer must locate, among a large amount of data, all image parameters corresponding to the image of interest, and then find among them the parameters that need adjusting. The second is to parse the image to be adjusted at an analysis tool end, obtain all of its image parameters, and then find the parameters that need adjusting. In the former approach, because all image parameters of all images are recorded in one file, there is too much information and the classification is unclear; when the image at a certain point in the video stream is problematic, it is difficult for the debugging engineer to directly and accurately locate the relevant image and the individual parameters within it, and looking up information is hard. In the latter approach, because the related information of the image is stored implicitly inside the image, it must be processed by the analysis tool according to a specific format; displaying the image information of every frame of a video on the analysis tool is therefore relatively complex and difficult. Accordingly, the present disclosure provides an image processing method to solve the above problems, as described in detail below.
Example 1
Fig. 1A is a flowchart of an image processing method according to a first embodiment of the present disclosure, where the method may be performed by an image processing apparatus according to an embodiment of the present disclosure, and the apparatus may be implemented in software and/or hardware. The method specifically comprises the following steps:
s110, inputting the image to be processed into a scene recognition model, and determining scene category information of the image to be processed.
The image to be processed may be an image captured by a camera, or may be an image obtained by processing a video captured by a camera, and scene category information corresponding to the image may be obtained from the image. The present embodiment uses an image obtained by processing a video captured by a camera, for example. The scene recognition model can be any neural network model capable of realizing the scene recognition function. The scene category information may be information that is classified by different scene characteristics presented by the image due to different shooting environments. For example, the present embodiment may be classified according to the scenes encountered by the vehicle during driving, including night scenes, headlamps, tunnels, overpasses, and the like.
Specifically, because the debugging parameter information to be focused on of the image to be processed in different scenes is different, the effect of picture presentation is directly affected by the different debugging parameter information. Therefore, in order to facilitate debugging engineers to directly locate the debugging parameter information concerned by the debugging engineers, the embodiment can classify each scene and determine scene category information corresponding to the image to be processed. Specifically, an image to be processed is input into a scene recognition model to obtain an output result. For example, the to-be-processed image is input into the scene recognition model to obtain scene category information such as whether the to-be-processed image belongs to a night scene, a headlight scene or a tunnel scene.
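The recognition step above can be sketched as follows. This is an illustrative stand-in only: the patent leaves the scene recognition model unspecified (any neural network with this function will do), so a trivial hand-written heuristic over two assumed image features plays the role of the model here.

```python
# Hypothetical stand-in for the scene recognition model. The patent does
# not specify a model architecture, so a simple feature heuristic is used
# purely for illustration; a real system would run neural-network inference.

def recognize_scene(mean_luma: float, has_tunnel_feature: bool) -> str:
    """Map coarse image features to a scene category label."""
    if has_tunnel_feature:
        return "tunnel"
    if mean_luma < 60:  # an overall dark frame is treated as a night scene
        return "night"
    return "daytime"

# A dark frame with no tunnel features is classified as a night scene.
print(recognize_scene(40.0, False))  # -> night
```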
In the embodiment of the disclosure, before collecting the video to be processed shot by the camera, the method further includes: according to the image features of each kind of scene category information, defining the debugging parameter information that each kind of scene category information needs to focus on.
The image features may be various scene features in the image, for example, the brightness of a local place is higher due to the presence of a headlight in the image in a headlight scene, the overall brightness of the image in a night scene is lower, and the image in a tunnel scene has not only lower brightness but also tunnel building features. The debug parameter information may be a critical parameter that requires special attention in order for images photographed in various scene environments to present a good quality picture.
In particular, hundreds of image parameters make up an image to be processed; processing every one of them would cost time and effort without benefiting the operation. Moreover, for images to be processed in different scenes, the image parameters that play a critical role are limited: debugging engineers focus on a fixed set of debugging parameters frequently used in each scene. Therefore, processing only this debugging parameter information is entirely sufficient to make the pictures captured in each scene more realistic and clear. Accordingly, based on the image features of each kind of scene category information, this embodiment manually sets the debugging parameter information that the debugging engineer needs to focus on under each kind of scene category information, and establishes the association between the scene category information and that debugging parameter information. For example, this embodiment may define that the image to be processed in a headlight scene focuses on group A debugging parameter information, while the image to be processed in a tunnel scene focuses on group B debugging parameter information, where the two groups are not exactly the same. Illustratively, a headlight scene may focus on parameters such as display brightness and high dynamic range (High Dynamic Range, HDR), and a tunnel scene may focus on parameters such as display color and HDR.
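The association between scene category information and the debugging parameters to focus on might be held in a simple lookup table, sketched below; the scene names and parameter names are assumptions for illustration, echoing the group A / group B example above.

```python
# Illustrative mapping from scene category to the debug parameters a
# tuning engineer focuses on. The parameter names here are assumptions,
# not taken from the patent.
SCENE_DEBUG_PARAMS = {
    "headlight": ["brightness", "hdr"],        # "group A" in the example
    "tunnel":    ["color", "hdr"],             # "group B" in the example
    "night":     ["brightness", "noise_reduction"],
}

def debug_params_for(scene: str) -> list:
    """Return the focused debug parameters for a scene, or none if unknown."""
    return SCENE_DEBUG_PARAMS.get(scene, [])

print(debug_params_for("tunnel"))  # -> ['color', 'hdr']
```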
S120, determining information to be displayed corresponding to the scene category information according to the scene category information.
The information to be displayed comprises state information of the image to be processed and debugging parameter information associated with scene category information.
The information to be displayed refers to the information shown on the display interface, and it includes state information and debugging parameter information. The state information is real parameter information automatically calculated by the system from the external environment image captured by the camera in real time; it changes with the external environment and covers all image parameters related to the image to be processed, such as luminance, high dynamic range (High Dynamic Range, HDR), and color. The debugging parameter information is the parameter information that most strongly affects the display effect of the image under different scene conditions. Displaying the state information helps a debugging engineer check the real parameters of the image to be processed, and displaying the debugging parameter information helps the engineer directly and purposefully debug the critical parameters that affect image quality under the given scene category. Taking luminance as an example of state information: when the external environment is relatively bright, the luminance state of the image to be processed may be displayed as 500; when the external environment is dark, it may be displayed as 100.
Specifically, after determining the scene category information, the embodiment may determine the state information and the debug parameter information corresponding to the scene category information, as shown in fig. 1B. Fig. 1B is a schematic diagram of an association relationship between scene category information and information to be displayed according to an embodiment of the present disclosure, where the parameter information is debug parameter information. For example, for scenario 1, the present embodiment may display state information 1 and debug parameter information 1; for scenario 2, the present embodiment may display state information 2 and debug parameter information 2, and so on. The debugging engineer can debug the debugging parameter information to adjust the display effect of the image to be processed.
In another embodiment, an optimal reference range may also be set for the debugging parameter information; that is, in order for images captured in each scene environment to present a good-quality picture, this embodiment provides a numerical reference range for the critical parameters that need special attention. For example, an image to be processed in a headlight scene focuses on group A debugging parameter information, and each item in group A is given an optimal reference range, e.g. an optimal luminance range of X-Y. If the state information falls outside the optimal reference range set for the debugging parameter information, the picture quality is poor, and the debugging engineer can adjust the parameter according to the set range. In yet another embodiment, the debugging parameter information may be an optimal value entered directly by the debugging engineer based on prior experience; this embodiment does not limit this.
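The optimal-reference-range check described above reduces to a bounds test, sketched here; the concrete range values are placeholders, since the patent leaves the X-Y range unspecified.

```python
# Illustrative optimal reference ranges: each debug parameter carries a
# (low, high) pair, and state values outside the pair are flagged for
# adjustment. The 80-400 luminance range is an assumed placeholder.
REFERENCE_RANGES = {"brightness": (80, 400)}

def out_of_range(param: str, value: float) -> bool:
    """True when the measured state value falls outside the optimal range."""
    low, high = REFERENCE_RANGES[param]
    return not (low <= value <= high)

print(out_of_range("brightness", 500))  # -> True (too bright, needs tuning)
```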
And S130, superposing the information to be displayed at a preset position of the image to be processed for display.
The preset position may be a position preset manually for displaying information to be displayed on the image to be processed, for example, may be a boundary position of the display interface.
Specifically, because in the prior art the debugging parameter information of an image to be processed is hard to search for and hard to locate accurately, this embodiment superimposes the debugging parameter information and state information associated with the image to be processed at a manually preset position on the image.
Illustratively, a night scene does not need highlight-related data but pays more attention to low-brightness parameters, and the information a display screen can show is limited. Therefore, this embodiment can display only the debugging parameter information and state information related to low brightness, so that the key data is presented in a more concentrated and convenient way, without having to locate it among all the data.
Because the road conditions captured while the vehicle is driving change in real time, the information to be displayed differs for each frame in the video and may even fail to switch in time, so this embodiment needs to debug such situations as well. For example, before the vehicle enters a tunnel, the parameters displayed on the image to be processed may be parameter group A. After entering the tunnel, parameter group B should be displayed because the scene has changed; if the actual display effect after entering the tunnel is poor because the displayed parameters did not switch in time, this embodiment can debug that problem too.
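The superimposing step can be sketched as follows. A real implementation would render the text onto the frame (for example with a graphics drawing API); this sketch only assembles the overlay lines and an assumed preset anchor position, both of which are illustrative choices rather than details from the patent.

```python
# Minimal sketch of superimposing the to-be-displayed information at a
# preset position. The (10, 20) top-left anchor is an assumed preset
# position; a real pipeline would draw these lines onto the frame.
def build_overlay(state: dict, debug: dict, anchor=(10, 20)):
    """Return the preset anchor and the text lines to superimpose."""
    merged = {**state, **debug}  # state info first, then debug parameters
    lines = [f"{k}: {v}" for k, v in merged.items()]
    return anchor, lines

anchor, lines = build_overlay({"brightness": 100}, {"hdr": "on"})
print(anchor, lines)  # -> (10, 20) ['brightness: 100', 'hdr: on']
```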
In this embodiment, the scene category information of the image to be processed is determined by inputting the image into the scene recognition model; the information to be displayed corresponding to the scene category information is determined accordingly; and that information is displayed at a preset position of the image to be processed. The debugging parameter information that a debugging engineer particularly cares about can thus be located directly without a dedicated development tool, which makes it convenient for the engineer to debug directly on the display interface and achieves parameter control of the image to be processed.
In the embodiment of the disclosure, before inputting the image to be processed into the scene recognition model and determining the scene category information of the image to be processed, the method further includes: collecting the video to be processed shot by the camera, and acquiring images from its multiple frames of continuous images at intervals according to a preset interval frame number to serve as images to be processed.
The video to be processed consists of a plurality of frames of continuous images, for example, road condition images shot by a vehicle in the driving process.
The preset interval frame number may be an interval frame number for manually setting and extracting a fixed number of images to be processed from a plurality of frames of continuous images of the video to be processed.
Specifically, when the video is played, the display time of each frame of image is very short, so that human eyes cannot clearly identify information to be displayed on each frame of image, and meanwhile, the difference of parameter information to be displayed between adjacent frames of images is not large. Therefore, in this embodiment, images in the video to be processed, which are shot by the camera, are extracted at intervals according to the preset interval frame number, so as to obtain multi-frame interval images to be processed. For example, the preset interval frame number in the present embodiment may be set to five frames, that is, one to-be-processed image is acquired every five frames in the to-be-processed video.
According to the embodiment, the preset interval frame number is set, and the images are acquired at intervals in the shot video to serve as images to be processed, so that a good visual effect is provided for a debugging engineer, and the debugging engineer is facilitated to check the debugging parameter information for debugging.
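The interval sampling described above reduces to keeping every Nth frame of the decoded sequence, as in this sketch using the five-frame example:

```python
# Interval sampling sketch: keep one frame every `interval` frames from a
# decoded frame sequence, matching the "one image every five frames"
# example above. Frames are represented here by their indices.
def sample_frames(frames, interval=5):
    """Return every `interval`-th frame, starting from the first."""
    return [frame for i, frame in enumerate(frames) if i % interval == 0]

print(sample_frames(list(range(12)), interval=5))  # -> [0, 5, 10]
```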
In an embodiment of the present disclosure, after displaying the information to be displayed superimposed at the preset position of the image to be processed, the method further includes: recording information to be displayed of an image to be processed by at least one of the following modes: embedding information to be displayed in an image to be processed for recording; or, storing the information to be displayed of the image to be processed into a record file; or, storing the state information of the image to be processed and all image parameter information into a record file.
The overall image parameter information may be overall parameter information for adjusting the effect of the picture color and sharpness, etc. in the image. The recording file may be any file having a recording function for storing related parameter information of an image in a video.
In particular, for the convenience of subsequent review and debugging, this embodiment may record and store the parameter information of the image to be processed in any of the above modes. In one embodiment, the processing may be done at the video source: the information to be displayed is embedded into the image to be processed at the source, so that the corresponding parameter information can be seen directly when the image or video is opened. In another embodiment, because the complete set of image parameter information for every frame of a full video is very large and much of it is rarely used, saving all of it in the record file would yield a very low probability of use, whereas debugging work usually needs only the information to be displayed. This embodiment can therefore store just the information to be displayed of the image to be processed in the record file, i.e. its state information and debugging parameter information, for subsequent query. In yet another embodiment, because all image parameter information of a single image to be processed occupies little memory, the state information and all image parameter information of the image to be processed can also be stored separately in the record file for subsequent debugging.
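The second recording mode, storing only the information to be displayed in a record file, might look like the following sketch; the JSON-lines layout and field names are assumptions for illustration, not taken from the patent.

```python
import json

# Sketch of the second recording mode: append only the to-be-displayed
# information (state + debug parameters) of each sampled frame to a
# record file, one JSON object per line. Schema is an assumption.
def append_record(path, frame_id, state, debug):
    """Append one frame's display info to the record file."""
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps({"frame": frame_id,
                             "state": state,
                             "debug": debug}) + "\n")
```

Each sampled frame appends one line, so the file stays small compared with dumping every parameter of every frame.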
Example Two
Fig. 2A is a flowchart of an image processing method according to a second embodiment of the present disclosure. After the information to be displayed corresponding to the scene category information is determined, the method further includes: classifying and caching the state information of the image to be processed and the debugging parameter information associated with the scene category information; and judging whether the current display mode of the image to be processed is the manual selection mode or the automatic display mode. Correspondingly, superimposing the information to be displayed at the preset position of the image to be processed includes: if the current display mode of the image to be processed is the manual selection mode, displaying, according to the received user operation instruction, the state information and the selected part of the debugging parameter information, superimposed at the preset position of the image to be processed; if the current display mode is the automatic display mode, displaying the state information and the debugging parameter information by default, superimposed at the preset position of the image to be processed. The method specifically includes the following steps:
S210, inputting the image to be processed into a scene recognition model, and determining scene category information of the image to be processed.
S220, determining information to be displayed corresponding to the scene category information according to the scene category information.
S230, classifying and caching debugging parameter information related to the state information and scene category information of the image to be processed.
Specifically, this embodiment determines the information to be displayed from the scene category information. To facilitate subsequent debugging, the information to be displayed of the image to be processed is classified and stored in text form, i.e., the state information of the image to be processed and the debugging parameter information associated with the scene category information are stored by category.
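The classify-and-cache step of S230 can be sketched as follows; the class name and the example scene names are illustrative assumptions, not identifiers from the disclosure:

```python
from collections import defaultdict

class DisplayInfoCache:
    """Caches the to-be-displayed information by scene category, so the
    system can later display or dump the entries for a given scene."""

    def __init__(self):
        self._by_scene = defaultdict(list)

    def put(self, scene, state_info, debug_params):
        # Store each entry in text form, keyed by scene category.
        self._by_scene[scene].append(f"state={state_info}; debug={debug_params}")

    def get(self, scene):
        # Return a copy so callers cannot mutate the cache directly.
        return list(self._by_scene[scene])
```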
S240, judging whether the current display mode of the image to be processed is a manual selection mode or an automatic display mode.
The current display mode is the mode in which the running system presents the information to be displayed. The manual selection mode lets parameter information be selected by hand, so the debugging parameter information a debugging engineer particularly cares about can be shown more directly and prominently. The automatic display mode is the default display mode for the information to be displayed: according to the image characteristics and scene category information, it displays by default the debugging parameter information predefined from experience for that scene category, i.e., it is the mode in which the operating system selects the debugging parameter information on its own.
S250, if the current display mode of the image to be processed is the manual selection mode, displaying, according to the received user operation instruction, the state information together with the selected part of the debugging parameter information, superimposed at the preset position of the image to be processed.
The user operation instruction is an operation instruction input by a debugging engineer to select which debugging parameter information to display, according to their own needs.
Specifically, this embodiment provides two modes for the debugging engineer to choose from. If the manual selection mode is chosen, the state information of the image to be processed and the manually selected debugging parameter information are displayed together at the preset position of the image to be processed, so the influence of the selected debugging parameters on the image can be understood more intuitively. Illustratively, the debugging engineer selects the current display mode from a menu on the display interface. If the manual selection mode is selected, a submenu is then displayed that lists the debugging parameter information of the image to be processed, and the engineer selects the parameters to adjust and display as needed. In another embodiment, the submenu lists all image parameter information of the image to be processed, and the engineer selects the parameters to adjust and display in the same way.
S260, if the current display mode of the image to be processed is the automatic display mode, displaying the state information and the debugging parameter information by default, superimposed at the preset position of the image to be processed.
Specifically, this embodiment provides two modes for the debugging engineer to choose from. If the automatic display mode is chosen, the system automatically displays, at the preset position of the image to be processed, the state information and the debugging parameter information corresponding to the scene category information of that image.
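The branch between S250 and S260 can be sketched as a single selection function; the parameter names below are assumptions for illustration, not identifiers from the disclosure:

```python
def build_overlay(state_info, debug_params, mode, selected_keys=None):
    """Return the key/value pairs to superimpose at the preset position.

    In the manual selection mode (S250) the state info is shown together
    with only the user-selected debugging parameters; in the automatic
    display mode (S260) everything is shown by default."""
    if mode == "manual":
        chosen = {k: debug_params[k] for k in (selected_keys or []) if k in debug_params}
    elif mode == "auto":
        chosen = dict(debug_params)
    else:
        raise ValueError(f"unknown display mode: {mode}")
    # State info is always displayed; the debugging parameters vary by mode.
    return {**state_info, **chosen}
```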
Fig. 2B is a simple flow chart of an image processing method according to an embodiment of the disclosure, and the specific process is as follows:
1) The automotive system starts and runs normally.
2) After the camera starts running, the information display system starts automatically.
3) Whether the information display system shows the information to be displayed is set according to the operation of the camera system.
4) The information to be displayed is classified and cached, so that the system can display it by scene category.
5) Through the interface, automatic display or manual selection of the information to be displayed is set, and a recording mode is selected.
6) When automatic display is selected, the scene is determined (e.g., headlight, tunnel, night scene).
7) The information to be displayed is shown according to the scene.
8) The required information is saved according to the selected recording mode.
9) The user can then use the saved information.
In this embodiment, the information to be displayed can be selected in either of two modes, manual selection or automatic display, and the key information is shown directly on the image or video, so that a debugging engineer can quickly and accurately locate it for debugging.
Example Three
Fig. 3 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure, where the apparatus specifically includes:
the scene acquisition module 310 is configured to input an image to be processed into the scene recognition model, and determine scene category information of the image to be processed;
the to-be-displayed information determining module 320 is configured to determine to-be-displayed information corresponding to scene category information according to the scene category information, where the to-be-displayed information includes status information of an image to be processed and debug parameter information associated with the scene category information;
the to-be-displayed information display module 330 is configured to superimpose the information to be displayed at a preset position of the image to be processed.
In an embodiment, the image processing apparatus further includes:
the image processing module is used for acquiring, before the image to be processed is input into the scene recognition model and the scene category information of the image to be processed is determined, the video to be processed shot by the camera, the video consisting of multiple frames of continuous images; and sampling images from the multi-frame continuous images at a preset interval frame number as the images to be processed.
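The interval sampling this module performs (take one frame every preset number of frames) reduces to a stride over the frame sequence; a minimal sketch, assuming the frames arrive as a list:

```python
def sample_frames(frames, interval):
    """Take every `interval`-th frame of a continuous sequence as the
    images to be processed, starting from the first frame."""
    if interval < 1:
        raise ValueError("interval frame number must be >= 1")
    return frames[::interval]
```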
In an embodiment, the image processing apparatus further includes:
the debugging parameter determining module is used for defining debugging parameter information which needs to be focused on by each scene type information according to the image characteristics of each scene type information before the video to be processed shot by the camera is acquired.
In an embodiment, the image processing apparatus further includes: the mode selection module is specifically configured to: classifying and caching debugging parameter information associated with state information and scene category information of an image to be processed; judging whether the current display mode of the image to be processed is a manual selection mode or an automatic display mode;
correspondingly, the to-be-displayed information display module is specifically configured to: if the current display mode of the image to be processed is the manual selection mode, display, according to the received user operation instruction, the state information and the selected part of the debugging parameter information, superimposed at the preset position of the image to be processed; if the current display mode of the image to be processed is the automatic display mode, display the state information and the debugging parameter information by default, superimposed at the preset position of the image to be processed.
In an embodiment, the image processing apparatus further includes: the information storage module, specifically configured to record the information to be displayed of the image to be processed in at least one of the following ways: embedding the information to be displayed in the image to be processed; or storing the information to be displayed of the image to be processed in a record file; or storing the state information and all image parameter information of the image to be processed in a record file.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device and a readable storage medium.
Fig. 4 illustrates a schematic block diagram of an example electronic device 400 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 4, the apparatus 400 includes a computing unit 401 that can perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM) 402 or a computer program loaded from a storage unit 408 into a Random Access Memory (RAM) 403. In RAM 403, various programs and data required for the operation of device 400 may also be stored. The computing unit 401, ROM 402, and RAM 403 are connected to each other by a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
Various components in device 400 are connected to I/O interface 405, including: an input unit 406 such as a keyboard, a mouse, etc.; an output unit 407 such as various types of displays, speakers, and the like; a storage unit 408, such as a magnetic disk, optical disk, etc.; and a communication unit 409 such as a network card, modem, wireless communication transceiver, etc. The communication unit 409 allows the device 400 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 401 may be a variety of general purpose and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 401 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 401 performs the respective methods and processes described above, for example, an image processing method. For example, in some embodiments, an image processing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 408. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 400 via the ROM 402 and/or the communication unit 409. When a computer program is loaded into RAM 403 and executed by computing unit 401, one or more steps of one image processing method described above may be performed. Alternatively, in other embodiments, the computing unit 401 may be configured to perform an image processing method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
The foregoing describes only specific embodiments of the disclosure, but the protection scope of the disclosure is not limited thereto; any changes or substitutions that a person skilled in the art can readily conceive within the technical scope of the disclosure are intended to fall within that protection scope. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (8)

1. An image processing method, the method comprising:
inputting an image to be processed into a scene recognition model, and determining scene category information of the image to be processed;
acquiring a video to be processed, which is shot by a camera, and acquiring images in a plurality of frames of continuous images at intervals according to a preset interval frame number to serve as the image to be processed; the video to be processed consists of a plurality of continuous images shot by an automobile in the driving process;
determining information to be displayed corresponding to the scene category information according to the scene category information, wherein the information to be displayed comprises state information of the image to be processed and debugging parameter information associated with the scene category information, the state information is all image parameters related to the image to be processed, and the debugging parameter information is a parameter associated with the scene category information in all the image parameters;
classifying and caching the state information of the image to be processed and the debugging parameter information;
judging whether the current display mode of the image to be processed is a manual selection mode or an automatic display mode;
if the current display mode of the image to be processed is the manual selection mode, displaying, according to the received user operation instruction, the state information and part of the debugging parameter information, superimposed at a preset position of the image to be processed;
if the current display mode of the image to be processed is judged to be the automatic display mode, superimposing the state information and the debugging parameter information at a preset position of the image to be processed for display;
and embedding the information to be displayed into an image to be processed in the video to be processed for recording, and storing all image parameter information of the video to be processed into a recording file.
2. The method of claim 1, further comprising, prior to capturing the video to be processed captured by the camera:
according to the image characteristics of each scene category information, defining the debugging parameter information which needs to be focused by each scene category information.
3. The method according to claim 2, further comprising, after displaying the information to be displayed superimposed at a preset position of the image to be processed:
and storing the information to be displayed of the image to be processed into a record file.
4. An image processing apparatus, characterized in that the apparatus comprises:
the scene acquisition module is used for inputting the image to be processed into the scene recognition model and determining scene category information of the image to be processed;
the image processing module is used for acquiring, before the image to be processed is input into the scene recognition model and the scene category information of the image to be processed is determined, the video to be processed shot by the camera, the video consisting of multiple frames of continuous images; and acquiring images from the multi-frame continuous images at intervals according to a preset interval frame number to serve as the images to be processed;
the information to be displayed determining module is used for determining information to be displayed corresponding to the scene category information according to the scene category information, wherein the information to be displayed comprises state information of the image to be processed and debugging parameter information associated with the scene category information, the state information is all image parameters related to the image to be processed, and the debugging parameter information is a critical parameter associated with the scene category information in all the image parameters;
the mode selection module is specifically used for classifying and caching the state information of the image to be processed and the debugging parameter information associated with the scene category information; judging whether the current display mode of the image to be processed is a manual selection mode or an automatic display mode;
The information display module to be displayed is specifically used for: if the current display mode of the image to be processed is a manual selection mode, displaying the state information and part of information in the debugging parameter information according to the received user operation instruction, and superposing the state information and part of information in the debugging parameter information at a preset position of the image to be processed; if the current display mode of the image to be processed is the automatic display mode, displaying the state information and the debugging parameter information by default and superposing the state information and the debugging parameter information at a preset position of the image to be processed; and the information storage module is used for embedding the information to be displayed into the image to be processed for recording, and storing all image parameter information of the video to be processed into a record file.
5. The apparatus as recited in claim 4, further comprising:
the debugging parameter determining module is used for defining, before the video to be processed shot by the camera is acquired, the debugging parameter information that each scene category information needs to focus on, according to the image characteristics of each scene category information.
6. The apparatus of claim 5, wherein the apparatus further comprises:
the information storage module is specifically further configured to store information to be displayed of the image to be processed into a record file.
7. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-3.
8. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-3.
CN202310511170.5A 2023-05-08 2023-05-08 Image processing method, device, equipment and storage medium Active CN116320720B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310511170.5A CN116320720B (en) 2023-05-08 2023-05-08 Image processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310511170.5A CN116320720B (en) 2023-05-08 2023-05-08 Image processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116320720A CN116320720A (en) 2023-06-23
CN116320720B true CN116320720B (en) 2023-09-29

Family

ID=86790830

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310511170.5A Active CN116320720B (en) 2023-05-08 2023-05-08 Image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116320720B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111416940A (en) * 2020-03-31 2020-07-14 维沃移动通信(杭州)有限公司 Shooting parameter processing method and electronic equipment
CN113011328A (en) * 2021-03-19 2021-06-22 北京百度网讯科技有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN115334250A (en) * 2022-08-09 2022-11-11 阿波罗智能技术(北京)有限公司 Image processing method and device and electronic equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101049718B1 (en) * 2008-12-29 2011-07-19 에스케이 텔레콤주식회사 How to perform software separation, device, and computer-readable recording media

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111416940A (en) * 2020-03-31 2020-07-14 维沃移动通信(杭州)有限公司 Shooting parameter processing method and electronic equipment
CN113011328A (en) * 2021-03-19 2021-06-22 北京百度网讯科技有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN115334250A (en) * 2022-08-09 2022-11-11 阿波罗智能技术(北京)有限公司 Image processing method and device and electronic equipment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant