CN115758005A - Large-screen image processing method, device and medium


Info

Publication number
CN115758005A
CN115758005A
Authority
CN
China
Prior art keywords
image
component
screen
trained
displayed
Prior art date
Legal status
Pending
Application number
CN202211445705.5A
Other languages
Chinese (zh)
Inventor
谢浩杰
王帅
李耀
赵慧婷
Current Assignee
China United Network Communications Group Co Ltd
Unicom Digital Technology Co Ltd
Original Assignee
China United Network Communications Group Co Ltd
Unicom Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by China United Network Communications Group Co Ltd, Unicom Digital Technology Co Ltd filed Critical China United Network Communications Group Co Ltd
Priority to CN202211445705.5A
Publication of CN115758005A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The application provides a method, a device and a medium for processing a large-screen image. The method acquires a picture to be displayed on a large screen, identifies the picture with a pre-configured image recognition model to obtain the large-screen image configuration parameters corresponding to the picture, and triggers a visualization platform to acquire the display components corresponding to those configuration parameters and display them on the large screen. Compared with the prior art, the large-screen image configuration parameters corresponding to the picture to be displayed are identified automatically by the image recognition model, converted into external data conforming to the visualization platform, and processed by the rendering engine of the visualization platform to generate the large-screen image. Technicians no longer need to develop each component in the picture to be displayed one by one; the large-screen image is generated automatically, which reduces the technicians' workload and improves the processing efficiency for complex pictures to be displayed.

Description

Large-screen image processing method, device and medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, and a medium for processing a large-screen image.
Background
With the development of big data technology, visual large screens are used more and more widely across industries. In business scenarios such as commerce, finance and manufacturing, a visual large screen serves daily monitoring, analysis and judgment, emergency command, presentation and reporting, and plays an important role in improving the scientific level of management.
In the prior art, the visual large-screen technology mainly relies on technicians who, based on an existing front-end framework, package the visual large-screen components (charts, maps and the like) into scripts in advance, select the scripts of the components that correspond to a UI design drawing, and compile them to generate the corresponding visual large screen; alternatively, the technician cuts and exports the components in the UI design drawing with an image-slicing technique and then reassembles the exported components into a web page, thereby obtaining the visual large-screen image.
However, whether the visualization components are packaged in advance or the corresponding components are rebuilt with the image-slicing technique, generating the visual large-screen image depends on the related technical personnel; for a relatively complex UI design drawing, the workload is heavy and the efficiency is low.
Disclosure of Invention
The application provides a method, a device and a medium for processing a large-screen image, which solve the problem of low processing efficiency when large-screen image visualization is implemented manually.
In a first aspect, the present application provides a method for processing a large-screen image, including:
acquiring a picture to be displayed on a large screen;
adopting a pre-configured image identification model to identify the picture to be displayed, and acquiring large-screen image configuration parameters corresponding to the picture to be displayed;
and triggering a visualization platform to acquire a display component corresponding to the large-screen image configuration parameters, and displaying the display component on the large screen.
In a specific embodiment, obtaining the preconfigured image recognition model comprises:
acquiring an image displayed on the large screen;
for each image, adopting a preset candidate frame, and performing labeling processing on the image to obtain components and component categories selected by the candidate frame in the image and coordinate information of the candidate frame;
respectively taking the components and component categories selected by the candidate frames in each image and the coordinate information of the candidate frames as data to be trained;
and training the initially configured image recognition model by adopting the data to be trained so as to obtain the pre-configured image recognition model.
In a specific embodiment, the method further comprises:
grouping the data to be trained to obtain a first group of data sets to be trained and a second group of data sets to be trained;
then, the training the initially configured image recognition model by using the data to be trained to obtain the preconfigured image recognition model includes:
training the initially configured image recognition model by adopting the first group of data sets to be trained to obtain a trained image recognition model;
and verifying the trained image recognition model by adopting the second group of data sets to be trained, and if the verification is successful, taking the image recognition model which is successfully verified as the pre-configured image recognition model.
In a specific embodiment, the triggering a visualization platform to obtain a display component corresponding to the large-screen image configuration parameters, and displaying the display component on the large screen, includes:
triggering the visualization platform to perform format conversion processing on the large-screen image configuration parameters according to a pre-configured format so as to obtain converted large-screen image configuration parameters;
and calling a display component corresponding to the component position information, the component type, the component size and the component color type in the converted large-screen image configuration parameters, and displaying the display component on the large screen.
In a specific embodiment, the retrieving a display component corresponding to component position information, a component type, a component size, and a component color type in the converted large-screen image configuration parameters, and displaying the display component on the large screen, includes:
calling a display component corresponding to component position information, component type, component size and component color type in the converted large-screen image configuration parameters;
and rendering the color scheme of the display component according to a UI color set pre-stored in the visualization platform, and displaying the rendered display component on the large screen.
In a second aspect, the present application provides an apparatus for processing a large-screen image, including:
the acquisition module is used for acquiring a picture to be displayed on a large screen;
the processing module is used for identifying the picture to be displayed by adopting a pre-configured image identification model and acquiring large-screen image configuration parameters corresponding to the picture to be displayed;
the processing module is further used for triggering a visualization platform to acquire a display component corresponding to the large-screen image configuration parameter and displaying the display component on the large screen.
In a specific embodiment, the processing module is specifically configured to:
acquiring an image displayed on the large screen;
for each image, adopting a preset candidate frame, and performing labeling processing on the image to obtain components and component categories selected by the candidate frame in the image and coordinate information of the candidate frame;
respectively taking the components and component categories selected by the candidate frames in each image and the coordinate information of the candidate frames as the data to be trained;
and training the initially configured image recognition model by adopting the data to be trained so as to obtain the pre-configured image recognition model.
In a specific embodiment, the processing module is specifically configured to:
grouping the data to be trained to obtain a first group of data sets to be trained and a second group of data sets to be trained;
training the initially configured image recognition model by adopting the first group of data sets to be trained to obtain a trained image recognition model;
and verifying the trained image recognition model by adopting the second group of data sets to be trained, and if the verification is successful, taking the image recognition model which is successfully verified as the pre-configured image recognition model.
In a specific embodiment, the processing module is specifically configured to:
triggering the visualization platform to perform format conversion processing on the large-screen image configuration parameters according to a pre-configured format so as to obtain converted large-screen image configuration parameters;
and calling a display component corresponding to the component position information, the component type, the component size and the component color type in the converted large-screen image configuration parameters, and displaying the display component on the large screen.
In a specific embodiment, the processing module is specifically configured to:
calling a display component corresponding to component position information, component type, component size and component color type in the converted large-screen image configuration parameters;
and rendering the color scheme of the display component according to a UI color set pre-stored in the visualization platform, and displaying the rendered display component on the large screen.
In a third aspect, the present application provides a server, including: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored by the memory to implement the method of any of the preceding claims.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, implement a method as in any of the preceding claims.
The application provides a method, a device and a medium for processing a large-screen image. The method acquires a picture to be displayed on a large screen, identifies the picture with a pre-configured image recognition model to obtain the large-screen image configuration parameters corresponding to the picture, and triggers a visualization platform to acquire a display component corresponding to the configuration parameters and display it on the large screen. Compared with the prior art, the large-screen image configuration parameters corresponding to the picture to be displayed are identified automatically by the pre-configured image recognition model, converted into external data conforming to the visualization platform, and processed by the rendering engine of the visualization platform to generate the large-screen image. Technicians do not need to develop each component in the picture to be displayed one by one, so their processing workload is greatly reduced and the processing efficiency for complex pictures to be displayed is improved.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings required for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present application, and other drawings can be derived from them by those skilled in the art without inventive effort.
Fig. 1 is a schematic diagram of a flow of a method for processing a large-screen image according to the present application;
fig. 2 is a schematic diagram of a flow of another processing method for large-screen images provided in the present application;
FIG. 3 is a schematic diagram illustrating a flow of another method for processing a large-screen image according to the present application;
FIG. 4 is a schematic diagram illustrating a flow of another method for processing a large-screen image according to the present application;
fig. 5 is a schematic diagram of a structure of a large-screen image processing device provided in the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments that can be made by one skilled in the art based on the embodiments in the present application in light of the present disclosure are within the scope of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and claims of this application and in the preceding drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the application described herein may be implemented in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
With the development of science and technology, industries pay increasing attention to the analysis and processing of data so that their business can be adjusted flexibly according to the results. How to present the analysis results intuitively, so that business managers can adjust the business efficiently, has therefore become a research hotspot in this field.
At present, data analysis results are usually displayed on a visual large screen, which is applicable to many fields. A visual large screen is generally realized by having technicians develop the content to be displayed with a front-end development framework, connect it to the information in a background database, and display the data in a preset form; alternatively, with an image-slicing technique, the image to be displayed may be segmented, for example in PS (Adobe Photoshop, image processing software), to obtain each component in the image, and the components are then rebuilt into a web page to display the visual large-screen image.
However, in the prior art, a large amount of processing work is required whether a technician develops directly from the image to be displayed or develops from the sliced components; technicians must also repeat a great deal of work when component sizes or color schemes are inconsistent, and even a small adjustment of the visual large-screen image is likely to require redevelopment. As a result, the processing efficiency of the visual large-screen image is low, and industry managers cannot adjust their business in time.
In view of this technical problem, the inventive idea of the application is to design a processing method that displays a visual large-screen image efficiently.
Hereinafter, the technical means of the present application will be described in detail by specific examples. It should be noted that the following specific embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments.
Fig. 1 is a schematic diagram of a flow of a processing method for a large-screen image provided in the present application, as shown in fig. 1, the processing method includes:
step 101, obtaining a picture to be displayed on a large screen.
In this embodiment, it should be noted that the method provided by the present application may be deployed on a server, and may be applied to data analysis and processing in various industries, as well as to emergency command, presentation and reporting, and the like.
Specifically, the server obtains a picture to be displayed that meets the user requirement; optionally, the picture to be displayed includes pie charts, bar charts, maps, and the like.
In an optional implementation manner, the server may obtain the picture to be displayed that meets the user requirement through the user terminal, specifically, the user terminal may be a mobile phone or a computer, and based on a network communication technology, the user terminal may establish a communication connection with the server.
After the server acquires the picture to be displayed, the server triggers a large-screen image processing method in the server, and processes the picture to be displayed according to the steps shown below.
And 102, identifying the picture to be displayed by adopting a pre-configured image identification model, and acquiring large-screen image configuration parameters corresponding to the picture to be displayed.
Specifically, the server prestores a preconfigured image recognition model used to recognize the components in the picture to be displayed and their information. The image recognition model is obtained by a person skilled in the art through neural network training, and optionally the neural network used includes, but is not limited to, the YOLOv4 network.
More specifically, after receiving the picture to be displayed, the server calls a pre-configured image recognition model, transmits the picture to be displayed to the pre-configured image recognition model for recognition processing, and acquires the image parameters of the picture to be displayed.
The picture to be displayed is characterized by its graphic parameters, which include: the components of the picture to be displayed, the component types, the positions of the components within the candidate frames of the picture, the component colors, and the like.
It should be noted that the components of the picture to be displayed refer to the kinds of chart elements it contains, such as a pie chart, a line chart, a sector chart or a map, while the component type refers to the appearance of such a component; for example, the outer contour of a pie chart is a circle whose interior contains a plurality of straight lines.
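As an illustration of step 102, the sketch below (in Python) shows how raw detector output might be turned into per-component configuration parameters. The `build_config_params` helper, the detection tuple layout and the parameter field names are assumptions made for this example; they are not taken from the patent or from any particular YOLOv4 implementation.

```python
# Minimal sketch: turning raw detector output into "large-screen image
# configuration parameters". The detection format and field names are
# illustrative assumptions only.
from typing import Dict, List, Tuple


def build_config_params(
    detections: List[Tuple[str, float, Tuple[int, int, int, int]]],
    min_confidence: float = 0.5,
) -> List[Dict]:
    """Convert (class_name, confidence, (x1, y1, x2, y2)) detections into
    per-component configuration parameters (type, position, size)."""
    params = []
    for class_name, confidence, (x1, y1, x2, y2) in detections:
        if confidence < min_confidence:
            continue  # drop low-confidence candidate frames
        params.append({
            "component_type": class_name,    # e.g. "pie_chart", "bar_chart", "map"
            "position": {"x": x1, "y": y1},  # top-left corner of the candidate frame
            "size": {"width": x2 - x1, "height": y2 - y1},
            "confidence": round(confidence, 3),
        })
    return params


if __name__ == "__main__":
    # Example detector output for one picture to be displayed (illustrative values).
    fake_detections = [
        ("pie_chart", 0.93, (40, 60, 420, 440)),
        ("bar_chart", 0.88, (480, 60, 1180, 440)),
        ("map", 0.91, (40, 500, 1180, 1040)),
    ]
    for p in build_config_params(fake_detections):
        print(p)
```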
After the server obtains the image parameters of the picture to be displayed, the following step 103 is continuously executed to complete the display processing of the large visual screen.
And 103, triggering the visualization platform to acquire the displayed components corresponding to the large-screen image configuration parameters, and displaying the displayed components on the large screen.
Specifically, the server prestores a configured visualization platform that comprises functional modules such as a data analysis module, a component configuration module, a style configuration module and a preview-and-publish module. These functional modules are used to generate the visual large-screen image, and the visualization platform is developed in advance by the relevant technicians in the field. The visualization may be performed in one of the modes shown below or a combination of them.
The first mode is a drag mode. Specifically, the visualization platform has a visual interface that is convenient for the user to use. The user can drag the components to be displayed into the corresponding areas, place them at the desired positions and then select the components' color options. When the user clicks the confirm button on the visual interface, the visualization platform receives the information of the selected components, calls its functional modules to process it, generates the visual large-screen image and displays it on the corresponding display device.
The second mode is an external interface mode. Specifically, the visualization platform provides an external data interface that receives data describing a pre-compiled visual large-screen image. The data format must meet the requirements of the visualization platform, for example the Json data format; the received data are parsed and compiled to generate the corresponding visual large-screen image, which is displayed on a preset display device.
The third mode is an image parameter mode. Specifically, after receiving the image parameters of the picture to be displayed, the visualization platform triggers its data analysis module to standardize the data format of the image parameters; the normalized image parameters are then transmitted by category to the component configuration module and the style configuration module, which retrieve, from a prestored source code database, the source code corresponding to each category of image parameter and store it under the corresponding directory; finally, the preview-and-publish module is triggered to run and compile the source code, generate the visual large-screen image and send it to the specified display device.
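The following sketch illustrates the dispatch performed in mode three: the normalized image parameters are split between a component-configuration part and a style-configuration part and serialized as Json-style external data. The field names and payload layout are illustrative assumptions, not the platform's actual schema.

```python
# Minimal sketch of "mode three": split each recognized component's parameters
# between the component configuration module and the style configuration
# module before handing them to the visualization platform.
import json

COMPONENT_FIELDS = {"component_type"}         # routed to the component configuration module
STYLE_FIELDS = {"position", "size", "color"}  # routed to the style configuration module


def dispatch_parameters(components):
    """Split each recognized component's parameters by category."""
    payload = {"components": [], "styles": []}
    for idx, comp in enumerate(components):
        payload["components"].append(
            {"id": idx, **{k: v for k, v in comp.items() if k in COMPONENT_FIELDS}}
        )
        payload["styles"].append(
            {"id": idx, **{k: v for k, v in comp.items() if k in STYLE_FIELDS}}
        )
    return payload


if __name__ == "__main__":
    recognized = [
        {"component_type": "pie_chart", "position": {"x": 40, "y": 60},
         "size": {"width": 380, "height": 380}, "color": "blue"},
    ]
    # The serialized payload stands in for the "external data" sent to the platform.
    print(json.dumps(dispatch_parameters(recognized), indent=2))
```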
It should be noted that the method provided by this embodiment follows the processing of mode three. In a practical scenario, the visualization platform provided by this embodiment may operate according to mode one, mode two and/or mode three, with mode one and mode two serving as auxiliaries to mode three.
This embodiment provides a method for processing a visual large-screen image: a picture to be displayed on a large screen is acquired, a pre-configured image recognition model is used to recognize the picture and obtain the large-screen image configuration parameters corresponding to it, and a visualization platform is triggered to obtain the display components corresponding to those configuration parameters and display them on the large screen. Compared with the prior art, the processing method provided by the application automatically identifies the image parameters of the picture to be displayed and then triggers the visualization platform to process them automatically, generate the visual large-screen image and display it on the specified display device, which greatly reduces the workload of the technicians who develop the visual large screen and improves processing efficiency, especially for complex pictures to be displayed.
Fig. 2 is a schematic diagram of another flow of a processing method for a large-screen image provided in the present application, and on the basis of the foregoing embodiment, the present embodiment describes in detail how to configure an image recognition model. The specific process of the configuration is as follows:
step 201, acquiring an image displayed on a large screen.
Specifically, when configuring the image recognition model, a large number of visual large-screen images must first be collected. Optionally, the images displayed on large screens may be collected with a crawler, on the order of 8000 to 10000 images, and the collected images are then preprocessed with an image enhancement technique, for example Mosaic, in which images are randomly scaled, randomly cropped, randomly arranged and spliced together, so that the processed images all have the same pixel size.
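For illustration, the sketch below shows a simplified Mosaic-style preprocessing step under the assumptions stated in the comments; a real pipeline would also remap the annotation boxes of the spliced images.

```python
# Minimal sketch of Mosaic-style preprocessing: four collected large-screen
# images are randomly scaled, randomly cropped, and spliced into one canvas of
# a fixed size, so every training image ends up with the same pixel dimensions.
# File names and parameter ranges are illustrative.
import random
from PIL import Image


def mosaic(paths, out_size=1280):
    """Splice four images into a single out_size x out_size canvas."""
    assert len(paths) == 4, "Mosaic uses exactly four source images"
    canvas = Image.new("RGB", (out_size, out_size))
    # Random split point dividing the canvas into four quadrants.
    cx = random.randint(out_size // 4, 3 * out_size // 4)
    cy = random.randint(out_size // 4, 3 * out_size // 4)
    quads = [(0, 0, cx, cy), (cx, 0, out_size, cy),
             (0, cy, cx, out_size), (cx, cy, out_size, out_size)]
    for path, (x1, y1, x2, y2) in zip(paths, quads):
        img = Image.open(path).convert("RGB")
        # Random scale, then a random crop the size of the target quadrant
        # (areas outside the source image are filled with black by crop()).
        scale = random.uniform(0.8, 1.5)
        img = img.resize((max(1, int(img.width * scale)), max(1, int(img.height * scale))))
        w, h = x2 - x1, y2 - y1
        left = random.randint(0, max(0, img.width - w))
        top = random.randint(0, max(0, img.height - h))
        patch = img.crop((left, top, left + w, top + h))
        canvas.paste(patch, (x1, y1))
    return canvas


if __name__ == "__main__":
    # Illustrative file names; replace with paths to collected screen captures.
    mosaic(["shot1.png", "shot2.png", "shot3.png", "shot4.png"]).save("mosaic.png")
```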
The server then proceeds as shown in step 202 below.
Step 202, for each image, a preset candidate frame is adopted to label the image so as to obtain the components and component categories selected by the candidate frame in the image and the coordinate information of the candidate frame.
Specifically, the server calls a picture labeling tool, such as Labeling, to label the preprocessed images: after the components in a preprocessed image are identified, each identified component is marked with a preset candidate frame, for example a rectangular frame. Once every component and its corresponding component category have been framed, coordinate information of the candidate frame used to label each component is generated, and this coordinate information may be stored in the ".xml" format.
The coordinate information includes the position of the candidate frame, the position of the component, the screenshot size, the color information of the component, and the like. Using the coordinate information avoids processing the candidate-frame-labeled image directly; that is, the coordinate information represents the labeled image, which improves processing efficiency.
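As an example of consuming such coordinate information, the sketch below reads one ".xml" annotation file. It assumes a Pascal-VOC-style layout of the kind commonly written by picture-labeling tools; the actual tag names used in practice may differ.

```python
# Minimal sketch: reading one candidate-frame annotation file. The ".xml"
# layout below is an assumed Pascal-VOC-style structure.
import xml.etree.ElementTree as ET


def read_annotation(xml_path):
    """Return (image_size, [(component_category, (xmin, ymin, xmax, ymax)), ...])."""
    root = ET.parse(xml_path).getroot()
    size = root.find("size")
    width = int(size.findtext("width"))
    height = int(size.findtext("height"))
    boxes = []
    for obj in root.iter("object"):
        category = obj.findtext("name")  # component category, e.g. "bar_chart"
        b = obj.find("bndbox")
        coords = tuple(int(float(b.findtext(t))) for t in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((category, coords))
    return (width, height), boxes


if __name__ == "__main__":
    print(read_annotation("screen_0001.xml"))  # illustrative file name
```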
More specifically, in order to further improve the processing efficiency, the embodiment may store the generated coordinate information in a corresponding first folder, and correspondingly store the preprocessed image in a second folder, where the file directories in the first folder correspond to the file directories in the second folder one by one.
In addition, the address paths of the first folder and the second folder can be generated, so that subsequent processing operation is more convenient and faster.
After the collected images have been preprocessed and labeled, processing continues as shown in steps 203 and 204 below.
Step 203, respectively using the components and component categories selected by the candidate frames in each image and the coordinate information of the candidate frames as data to be trained.
And step 204, training the initially configured image recognition model by adopting the data to be trained so as to obtain the pre-configured image recognition model.
Specifically, the components and component categories framed by the candidate frames in the above steps, together with the coordinate information extracted from the candidate frames, are used as the data set to be trained, that is, a training data set containing parameters such as the components, component categories and component colors.
More specifically, the server prestores an initially configured image recognition model. Optionally, it may be a target detection neural network such as YOLOv4; in other optional embodiments another target detection model may be used instead, and this embodiment does not limit the choice.
Further, following the processing in step 202, the data set to be trained may be represented by the address paths mentioned above; correspondingly, the server can directly invoke the address paths, feed them to the initially configured image recognition model, and perform training to obtain the pre-configured image recognition model.
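A minimal sketch of this path-based representation is given below: the mirrored annotation and image folders are paired by file name and the resulting address paths are written to a manifest that a training routine could consume. Folder and file names are illustrative.

```python
# Minimal sketch of the folder pairing: annotation files in the "first folder"
# and preprocessed images in the "second folder" share relative names, so the
# training data can be represented purely by address paths.
from pathlib import Path

ANNOTATION_DIR = Path("annotations")  # "first folder": candidate-frame .xml files
IMAGE_DIR = Path("images")            # "second folder": preprocessed screen images


def build_manifest(manifest_path="train_manifest.txt"):
    lines = []
    for xml_file in sorted(ANNOTATION_DIR.glob("*.xml")):
        img_file = IMAGE_DIR / (xml_file.stem + ".png")
        if img_file.exists():  # keep only one-to-one pairs
            lines.append(f"{img_file}\t{xml_file}")
    Path(manifest_path).write_text("\n".join(lines), encoding="utf-8")
    return manifest_path


if __name__ == "__main__":
    # The manifest's address paths are what gets handed to the training routine.
    print("wrote", build_manifest())
```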
This example explains the processing steps for obtaining the preconfigured image recognition model. By training an image recognition model that recognizes the picture to be displayed, the processing workload of technicians can be reduced and the processing efficiency of large-screen images improved.
Fig. 3 is a schematic diagram of a flow of another processing method for large-screen images provided by the present application. On the basis of the configuration process for the image recognition model shown in fig. 2, after the server acquires the data set to be trained, in an optional implementation the configuration process further includes, as shown in fig. 3:
step 301, acquiring an image displayed on a large screen.
Step 302, for each image, a preset candidate frame is adopted to label the image, so as to obtain the components and component categories selected by the candidate frame in the image, and coordinate information of the candidate frame.
And step 303, respectively taking the components and component categories selected by the candidate frames in each image and the coordinate information of the candidate frames as data to be trained.
Specifically, the processing principle of steps 301 to 303 in this embodiment is the same as that of steps 201 to 203 in the above embodiment and is not repeated here.
And step 304, grouping the data to be trained to obtain a first group of data sets to be trained and a second group of data sets to be trained.
Specifically, after the server obtains the data set to be trained, in order to make the configured image recognition model more accurate, the data set is divided at a preset ratio, such as 7:3, into a first data set to be trained, used for training, and a second data set to be trained, used for testing.
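The grouping can be as simple as a seeded shuffle followed by a proportional cut, as in the sketch below (the 7:3 ratio is the example given above).

```python
# Minimal sketch of the 7:3 grouping: shuffle the manifest entries and split
# them into a first set used for training and a second set used for verification.
import random


def split_dataset(samples, train_ratio=0.7, seed=42):
    items = list(samples)
    random.Random(seed).shuffle(items)  # fixed seed keeps the split reproducible
    cut = int(len(items) * train_ratio)
    return items[:cut], items[cut:]     # (first set, used to train; second set, used to verify)


if __name__ == "__main__":
    train_set, val_set = split_dataset(range(100))
    print(len(train_set), len(val_set))  # 70 30
```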
Step 305, training the initially configured image recognition model by using the first group of data sets to be trained to obtain the trained image recognition model.
Specifically, the server inputs the first data set to be trained into the initially configured image recognition model and trains it according to preset training parameters such as the batch size, the learning rate and the number of iterations to obtain the trained image recognition model.
And step 306, adopting a second group of data sets to be trained to verify the trained image recognition model, and if the verification is successful, taking the image recognition model which is successfully verified as a pre-configured image recognition model.
Specifically, after the trained image recognition model is obtained, the second data set to be trained is used for testing in order to ensure its accuracy: the second data set is input into the trained image recognition model and the model's accuracy is verified.
More specifically, the training result of the trained image recognition model on the second data set to be trained is obtained and its match rate against the second data set is judged; if the match rate reaches 90% or more, the trained image recognition model is taken as the pre-configured image recognition model.
Optionally, when the match rate between the training result and the second data set to be trained does not reach 90%, the training parameters of the trained image recognition model need to be adjusted until the match rate reaches 90% or more.
To do so, the server triggers a training parameter adjustment routine: a corresponding step size is set within the preset value range of each training parameter, and the values of the training parameters are tried in turn until the match rate between the training result and the second data set to be trained reaches 90% or more. The training parameters of the trained image recognition model are then updated with the current values, and the model with the updated training parameters is used as the pre-configured image recognition model.
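The sketch below outlines this verification-and-adjustment loop. `train_model` and `evaluate_match_rate` are hypothetical placeholders for the real training and verification routines, and the parameter ranges and step values are illustrative.

```python
# Minimal sketch of the verification loop: sweep the preset value ranges of the
# training parameters with a fixed step, retrain, and accept the first model
# whose match rate on the second data set reaches 90%.
from itertools import product


def train_model(train_set, lr, batch_size, iterations):
    ...  # placeholder: train the initially configured recognition model


def evaluate_match_rate(model, val_set):
    ...  # placeholder: fraction of components recognized consistently with the labels
    return 0.0


def tune(train_set, val_set, threshold=0.90):
    lrs = [1e-3, 5e-4, 1e-4]            # preset value range, fixed "moving step"
    batch_sizes = [8, 16]
    iteration_counts = [10_000, 20_000]
    for lr, bs, iters in product(lrs, batch_sizes, iteration_counts):
        model = train_model(train_set, lr, bs, iters)
        if evaluate_match_rate(model, val_set) >= threshold:
            return model, {"lr": lr, "batch_size": bs, "iterations": iters}
    return None, None  # no parameter combination reached the threshold
```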
This embodiment explains how to make the preconfigured image recognition model more accurate. A more accurate recognition model reduces errors and, in turn, the processing workload of technicians: if the model recognizes a component inaccurately, a technician has to correct it manually.
Fig. 4 is a schematic diagram of a flow of another processing method for large-screen images provided by the present application. On the basis of the embodiments shown in fig. 1 to fig. 3, this embodiment explains the processing performed after the server acquires the image parameters of the picture to be displayed. As shown in fig. 4, the processing method includes:
step 401, triggering the visualization platform to perform format conversion processing on the large-screen image configuration parameters according to the pre-configured format, so as to obtain the converted large-screen image configuration parameters.
Specifically, after acquiring the image parameters of the picture to be displayed, the server sends them to the preconfigured visualization platform for processing. Correspondingly, after receiving the image parameters, the visualization platform triggers its data analysis module to parse them; optionally, the field names corresponding to parameters such as the components, component types and component colors are converted, and any data that do not conform to the visualization platform's format are converted into the pre-configured value format so that the other modules of the platform can process them.
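For illustration, the sketch below shows one possible field-name and value conversion of the kind described in step 401; the mapping tables are assumptions, not the visualization platform's real pre-configured format.

```python
# Minimal sketch of the format conversion in step 401: rename the recognizer's
# field names to the names the platform expects and normalize values that do
# not match its pre-configured format. Both mappings are illustrative.
FIELD_NAME_MAP = {
    "component_type": "widgetType",
    "position": "layout.position",
    "size": "layout.size",
    "color": "style.colorKey",
}

VALUE_MAP = {"pie_chart": "PieChart", "bar_chart": "BarChart", "map": "GeoMap"}


def convert(params):
    converted = {}
    for key, value in params.items():
        new_key = FIELD_NAME_MAP.get(key, key)
        converted[new_key] = VALUE_MAP.get(value, value) if isinstance(value, str) else value
    return converted


if __name__ == "__main__":
    print(convert({"component_type": "pie_chart",
                   "position": {"x": 40, "y": 60},
                   "size": {"width": 380, "height": 380},
                   "color": "blue"}))
```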
Step 402, calling a display component corresponding to the component position information, the component type, the component size and the component color type in the converted large-screen image configuration parameters, and displaying the display component on a large screen.
Specifically, after the data analysis module finishes, the visualization platform distributes the converted large-screen image configuration parameters to the component configuration module and the style configuration module according to the field names of the image parameters. For example, the parameters related to the components and component types are sent to the component configuration module, while information such as the component positions, sizes and colors is sent to the style configuration module for corresponding processing, so that the picture to be displayed is shown on the visual large screen.
More specifically, the visualization platform calls up the display component corresponding to the component position information, component type, component size and component color type in the converted large-screen image configuration parameters, renders the color scheme of the display component according to a UI color set prestored in the visualization platform, and displays the rendered display component as an image on the large screen.
Furthermore, the visualization platform prestores a source code database for the display components and their related information (component type, component color, component size and component position). The component configuration module and the style configuration module call up, from the source code database, the source code corresponding to the received image parameters according to their field names and trigger the rendering engine of the visualization platform to run and compile the obtained source code, thereby obtaining the display components corresponding to the image parameters. The colors of the display components are then rendered in combination with the UI color set information prestored in the visualization platform, a color scheme is determined for each display component, and the large-screen image is displayed on the specified display device.
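The sketch below illustrates the idea of step 402: a display-component template is looked up by component type and its color scheme is taken from a prestored UI color set. The template registry, the color set and the option names stand in for the platform's source code database and rendering engine and are purely illustrative.

```python
# Minimal sketch of step 402: look up a display-component template by type and
# render its color scheme from a prestored UI color set. All values are
# illustrative stand-ins for the platform's actual data.
UI_COLOR_SET = {"primary": "#1F6FEB", "secondary": "#2EA043", "accent": "#F2CC60"}

COMPONENT_TEMPLATES = {
    "PieChart": {"renderer": "pie", "legend": True},
    "BarChart": {"renderer": "bar", "legend": True},
    "GeoMap":   {"renderer": "map", "legend": False},
}


def build_display_component(config):
    template = COMPONENT_TEMPLATES[config["widgetType"]]  # call up the matching template
    color_key = config.get("style.colorKey", "primary")
    return {
        **template,
        "position": config["layout.position"],
        "size": config["layout.size"],
        "color": UI_COLOR_SET.get(color_key, UI_COLOR_SET["primary"]),
    }


if __name__ == "__main__":
    cfg = {"widgetType": "PieChart",
           "layout.position": {"x": 40, "y": 60},
           "layout.size": {"width": 380, "height": 380},
           "style.colorKey": "primary"}
    print(build_display_component(cfg))  # ready to hand to the rendering engine
```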
Meanwhile, the visualization platform is also connected to the database of the data to be displayed, so that the data are presented as a visual large-screen image and their changes can be observed in real time, which helps managers adjust the business.
This embodiment explains how the acquired image parameters of the picture to be displayed are turned into a large-screen image on the visualization platform. The large-screen image is displayed automatically from the image parameters and changes as the data to be displayed change, which reduces the processing workload of technicians and improves the efficiency of producing large-screen images for display.
Fig. 5 is a schematic structural diagram of a large-screen image processing apparatus provided in the present application, corresponding to the method for processing a large-screen image in the present application. For ease of illustration, only the portions relevant to the present application are shown.
As shown in fig. 5, the processing apparatus 50 includes: an acquisition module 501 and a processing module 502.
The acquiring module 501 is configured to acquire a picture to be displayed on a large screen; the processing module 502 is configured to use a preconfigured image recognition model to perform recognition processing on a picture to be displayed, and obtain a large-screen image configuration parameter corresponding to the picture to be displayed; the processing module 502 is further configured to trigger the visualization platform to obtain a displayed component corresponding to the large-screen image configuration parameter, and display the displayed component on the large screen.
Optionally, the processing module 502 is specifically configured to:
acquiring an image displayed on a large screen;
for each image, adopting a preset candidate frame to label the image so as to obtain components and component categories selected by the candidate frame in the image and coordinate information of the candidate frame;
respectively taking the components and component categories selected by the candidate frames in each image and the coordinate information of the candidate frames as data to be trained;
and training the initially configured image recognition model by adopting the data to be trained so as to obtain the pre-configured image recognition model.
Optionally, the processing module 502 is specifically configured to:
grouping data to be trained to obtain a first group of data sets to be trained and a second group of data sets to be trained;
training an initially configured image recognition model by adopting a first group of data sets to be trained to obtain a trained image recognition model;
and verifying the trained image recognition model by adopting a second group of data sets to be trained, and if the verification is successful, taking the image recognition model which is successfully verified as a pre-configured image recognition model.
Optionally, the processing module 502 is specifically configured to:
triggering the visualization platform to perform format conversion processing on the large-screen image configuration parameters according to the pre-configured format so as to obtain the converted large-screen image configuration parameters;
and calling a display component corresponding to the component position information, the component type, the component size and the component color type in the converted large-screen image configuration parameters, and displaying the display component on the large screen.
Optionally, the processing module 502 is specifically configured to:
calling a display component corresponding to component position information, component type, component size and component color type in the converted large-screen image configuration parameters;
and rendering the color scheme of the display component according to a UI color set pre-stored in the visualization platform, and displaying the rendered display component on the large screen.
This embodiment provides a processing apparatus for a large-screen image. The apparatus includes an acquisition module for acquiring a picture to be displayed on a large screen, and a processing module for recognizing the picture to be displayed with a pre-configured image recognition model and obtaining the large-screen image configuration parameters corresponding to it; the processing module is further used to trigger the visualization platform to obtain the display components corresponding to the large-screen image configuration parameters and display them on the large screen. Compared with the prior art, technicians do not need to develop each component in the picture to be displayed one by one, and the large-screen image is generated automatically, which reduces the technicians' workload and improves the processing efficiency for complex pictures to be displayed.
This embodiment further provides a server, including a processor and a memory communicatively coupled to the processor. The memory stores computer-executable instructions; the processor executes the computer-executable instructions stored in the memory to implement the technical solutions provided by any of the foregoing embodiments.
The embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the technical solution provided by any of the foregoing embodiments is implemented.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (12)

1. A method for processing a large-screen image is characterized by comprising the following steps:
acquiring a picture to be displayed on a large screen;
adopting a pre-configured image identification model to identify the picture to be displayed, and acquiring large-screen image configuration parameters corresponding to the picture to be displayed;
and triggering a visualization platform to acquire a display component corresponding to the large-screen image configuration parameters, and displaying the display component on the large screen.
2. The method of claim 1, wherein obtaining the preconfigured image recognition model comprises:
acquiring an image displayed on the large screen;
for each image, adopting a preset candidate frame to label the image so as to obtain components and component categories selected by the candidate frame in the image and coordinate information of the candidate frame;
respectively taking the components and component categories selected by the candidate frames in each image and the coordinate information of the candidate frames as data to be trained;
and training the initially configured image recognition model by adopting the data to be trained so as to obtain the pre-configured image recognition model.
3. The method of claim 2, further comprising:
grouping the data to be trained to obtain a first group of data sets to be trained and a second group of data sets to be trained;
then, the training the initially configured image recognition model by using the data to be trained to obtain the preconfigured image recognition model includes:
training the initially configured image recognition model by adopting the first group of data sets to be trained to obtain a trained image recognition model;
and verifying the trained image recognition model by adopting the second group of data sets to be trained, and if the verification is successful, taking the image recognition model which is successfully verified as the pre-configured image recognition model.
4. The method according to any one of claims 1 to 3, wherein the triggering a visualization platform to acquire a display component corresponding to the large-screen image configuration parameter and display the display component on the large screen, comprises:
triggering the visualization platform to perform format conversion processing on the large-screen image configuration parameters according to a pre-configured format so as to obtain converted large-screen image configuration parameters;
and calling a display component corresponding to the component position information, the component type, the component size and the component color type in the converted large-screen image configuration parameters, and displaying the display component on the large screen.
5. The method according to claim 4, wherein the retrieving a display component corresponding to component position information, component type, component size, and component color type in the converted large-screen image configuration parameters and displaying the display component on the large screen comprises:
calling a display component corresponding to component position information, component type, component size and component color type in the converted large-screen image configuration parameters;
and rendering the color scheme of the display component according to a UI color set pre-stored in the visualization platform, and displaying the rendered display component on the large screen.
6. An apparatus for processing a large-screen image, comprising:
the acquisition module is used for acquiring a picture to be displayed on a large screen;
the processing module is used for identifying the picture to be displayed by adopting a pre-configured image identification model and acquiring large-screen image configuration parameters corresponding to the picture to be displayed;
the processing module is further used for triggering a visualization platform to acquire a display component corresponding to the large-screen image configuration parameter and displaying the display component on the large screen.
7. The apparatus of claim 6, wherein the processing module is specifically configured to:
acquiring an image displayed on the large screen;
for each image, adopting a preset candidate frame, and performing labeling processing on the image to obtain components and component categories selected by the candidate frame in the image and coordinate information of the candidate frame;
respectively taking the components and component categories selected by the candidate frames in each image and the coordinate information of the candidate frames as data to be trained;
and training the initially configured image recognition model by adopting the data to be trained so as to obtain the pre-configured image recognition model.
8. The apparatus according to claim 7, wherein the processing module is specifically configured to:
grouping the data to be trained to obtain a first group of data sets to be trained and a second group of data sets to be trained;
training the initially configured image recognition model by adopting the first group of data sets to be trained to obtain a trained image recognition model;
and verifying the trained image recognition model by adopting the second group of data sets to be trained, and if the verification is successful, taking the image recognition model which is successfully verified as the pre-configured image recognition model.
9. The apparatus according to any one of claims 6 to 8, wherein the processing module is specifically configured to:
triggering the visualization platform to perform format conversion processing on the large-screen image configuration parameters according to a pre-configured format so as to obtain converted large-screen image configuration parameters;
and calling a display component corresponding to the component position information, the component type, the component size and the component color type in the converted large-screen image configuration parameters, and displaying the display component on the large screen.
10. The apparatus of claim 9, wherein the processing module is specifically configured to:
calling a display component corresponding to component position information, component type, component size and component color type in the converted large-screen image configuration parameters;
and rendering the color scheme of the display component according to a UI color set pre-stored in the visualization platform, and displaying the rendered display component on the large screen.
11. A server, characterized in that the server comprises: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored by the memory to implement the method of any of claims 1-5.
12. A computer-readable storage medium having computer-executable instructions stored thereon which, when executed by a processor, implement the method of any one of claims 1-5.
CN202211445705.5A 2022-11-18 2022-11-18 Large-screen image processing method, device and medium Pending CN115758005A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211445705.5A CN115758005A (en) 2022-11-18 2022-11-18 Large-screen image processing method, device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211445705.5A CN115758005A (en) 2022-11-18 2022-11-18 Large-screen image processing method, device and medium

Publications (1)

Publication Number Publication Date
CN115758005A 2023-03-07

Family

ID=85373188

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211445705.5A Pending CN115758005A (en) 2022-11-18 2022-11-18 Large-screen image processing method, device and medium

Country Status (1)

Country Link
CN (1) CN115758005A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116431153A (en) * 2023-06-15 2023-07-14 北京尽微致广信息技术有限公司 UI component screening method, device, equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination