CN116958300A - Image processing apparatus, image processing method, image capturing apparatus, and electronic apparatus - Google Patents

Image processing apparatus, image processing method, image capturing apparatus, and electronic apparatus Download PDF

Info

Publication number
CN116958300A
CN116958300A (application number CN202310938170.3A)
Authority
CN
China
Prior art keywords
image
view
renderer
displayed
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310938170.3A
Other languages
Chinese (zh)
Inventor
李佳隆
蒲小飞
孙洪福
刘通
王刘杨
王小童
袁光芯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Selis Phoenix Intelligent Innovation Technology Co ltd
Original Assignee
Chongqing Seres New Energy Automobile Design Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Seres New Energy Automobile Design Institute Co Ltd filed Critical Chongqing Seres New Energy Automobile Design Institute Co Ltd
Priority to CN202310938170.3A priority Critical patent/CN116958300A/en
Publication of CN116958300A publication Critical patent/CN116958300A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/001: Texturing; Colouring; Generation of texture or colour
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44: Arrangements for executing specific programs
    • G06F9/451: Execution arrangements for user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application relates to the technical field of image processing, and provides an image processing apparatus, an image processing method, an image capturing apparatus, and an electronic device. The apparatus comprises a view unit, a view model unit, a first renderer, a second renderer, and a filter drawing unit. The view unit comprises an interface main class; the interface main class comprises N display interface fragments, and each display interface fragment comprises a custom control. The view model unit comprises a view model base class; the view model base class comprises M view models, and the M view models are bound with the N display interface fragments. The first renderer receives layout information of a display interface fragment through the custom control, acquires an image to be displayed, and renders the image to be displayed based on the layout information to obtain rendering data of the image to be displayed. The filter drawing unit receives the rendering data and calls the second renderer to perform coloring processing on the rendering data to obtain the image to be displayed. The apparatus can improve image processing efficiency.

Description

Image processing apparatus, image processing method, image capturing apparatus, and electronic apparatus
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing apparatus, an image processing method, an image capturing apparatus, and an electronic device.
Background
Conventional camera software is typically implemented using a Model-View-Controller (MVC) architecture and developed against the Camera application program interface (API) provided by Google. This approach has the following problems:
The interface and logic of the control layer in the MVC architecture are coupled, which hampers later maintenance. As business logic grows, control-layer code keeps increasing, and a single control-layer class can exceed 3,000 lines, placing a heavy burden on developers.
Development relies entirely on the Camera API provided by Google; although the camera stream can be controlled, extensibility is limited, and special-effect texture operations such as watermarking and beautification cannot be realized.
Disclosure of Invention
In view of the above, the embodiments of the present application provide an image processing apparatus, an image processing method, an image capturing apparatus, and an electronic device, so as to solve the problem in the prior art that a camera is inflexible and inconvenient when performing image processing.
In a first aspect of an embodiment of the present application, there is provided an image processing apparatus including:
the device comprises a view unit, a view model unit, a first renderer, a second renderer and a filter drawing unit;
the view unit comprises an interface main class, wherein the interface main class comprises N display interface fragments, and each display interface fragment comprises a custom control, wherein N is a positive integer;
the view model unit comprises a view model base class, the view model base class comprises M view models, the M view models are bound with N display interface fragments, and the view models are used for processing the representation logic and states of the display interface fragments, wherein M is a positive integer less than or equal to N;
the first renderer receives layout information of the display interface fragments through the custom control;
the method comprises the steps that a first renderer acquires an image to be displayed;
the first renderer renders the image to be displayed based on the layout information to obtain rendering data of the image to be displayed;
and the filter drawing unit receives the rendering data and calls the second renderer to color the rendering data to obtain the image to be displayed.
In a second aspect of the embodiment of the present application, there is provided an image processing method, including:
acquiring user input from a view unit;
determining a view model based on user input, and further determining a display interface segment corresponding to the view model;
sending layout information of the display interface fragment to a first renderer through a custom control;
the first renderer acquiring an image to be displayed;
the first renderer rendering the image to be displayed based on the layout information to obtain rendering data of the image to be displayed;
and a filter drawing unit receiving the rendering data and calling a second renderer to color the rendering data to obtain the image to be displayed.
In a third aspect of the embodiments of the present application, there is provided an image capturing apparatus including the above-described image processing apparatus.
In a fourth aspect of the embodiments of the present application, there is provided an electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the above method when executing the computer program.
Compared with the prior art, the embodiments of the present application have the following beneficial effects: in the image processing apparatus provided by the embodiments of the present application, a view unit, a view model unit, a first renderer, a second renderer, and a filter drawing unit are arranged. The view unit provides the display interface fragments, the view model unit provides the view models corresponding to the display interface fragments, the custom control communicates with the first renderer, the first renderer sends the rendering data obtained by its processing to the filter drawing unit, and the filter drawing unit calls the second renderer to color the rendering data and finally display it. This decouples the view from the view model, reduces the complexity of the code developers must handle, and improves image processing efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic view of an application scenario according to an embodiment of the present application.
Fig. 2 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of a view unit according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of a view model unit according to an embodiment of the present application.
Fig. 5 is a schematic view of a software framework of a camera including an image processing apparatus according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Fig. 7 is a flowchart of an improved image processing method according to an embodiment of the present application.
Fig. 8 is a schematic diagram of an image capturing apparatus according to an embodiment of the present application.
Fig. 9 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
An image processing apparatus and method according to an embodiment of the present application will be described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic view of an application scenario according to an embodiment of the present application. The application scenario may include terminal devices 1, 2 and 3, a server 4 and a network 5.
The terminal devices 1, 2 and 3 may be hardware or software. When the terminal devices 1, 2 and 3 are hardware, they may be various electronic devices having a display screen and supporting communication with the server 4, including but not limited to smart phones, tablet computers, laptop computers, desktop computers, and the like; when the terminal devices 1, 2 and 3 are software, they may be installed in the electronic devices described above. The terminal devices 1, 2 and 3 may be implemented as a plurality of pieces of software or software modules, or as a single piece of software or software module, which is not limited in the embodiments of the present application. Further, the terminal devices 1, 2 and 3 may include image pickup apparatuses and may have camera software built into the system to process images captured by the image pickup apparatuses. Furthermore, various applications may be installed on the terminal devices 1, 2 and 3, such as data processing applications, instant messaging tools, social platform software, search applications, shopping applications, and the like.
The server 4 may be a server that provides various services, for example, a background server that receives a request transmitted from a terminal device with which communication connection is established, and the background server may perform processing such as receiving and analyzing the request transmitted from the terminal device and generate a processing result. The server 4 may be a server, a server cluster formed by a plurality of servers, or a cloud computing service center, which is not limited in this embodiment of the present application.
The server 4 may be hardware or software. When the server 4 is hardware, it may be various electronic devices that provide various services to the terminal devices 1, 2, and 3. When the server 4 is software, it may be a plurality of software or software modules providing various services to the terminal devices 1, 2 and 3, or may be a single software or software module providing various services to the terminal devices 1, 2 and 3, to which the embodiment of the present application is not limited.
The network 5 may be a wired network using coaxial cable, twisted pair wire, and optical fiber connection, or may be a wireless network that can implement interconnection of various communication devices without wiring, for example, bluetooth (Bluetooth), near field communication (Near Field Communication, NFC), infrared (Infrared), etc., which is not limited in the embodiment of the present application.
The user can establish a communication connection with the server 4 via the network 5 through the terminal devices 1, 2, and 3 to receive or transmit information or the like. Specifically, when the user captures an image using the image capturing devices on the terminal apparatuses 1, 2, and 3, the acquired camera stream data may be processed locally on the terminal apparatuses 1, 2, and 3 or transmitted to the server 4 via the network 5 for processing. After the processing of the camera streaming data is completed, the processed image may be displayed on the interface of the terminal.
It should be noted that the specific types, numbers and combinations of the terminal devices 1, 2 and 3, the server 4 and the network 5 may be adjusted according to the actual requirements of the application scenario, which is not limited in the embodiment of the present application.
As mentioned above, conventional camera software is typically implemented using an MVC architecture, in which the control layer's interface and logic are coupled, which hampers later maintenance. Such software is also usually developed with the Camera API provided by Google, which can control the camera stream but has limited extensibility and cannot realize special-effect texture operations such as watermarking and beautification.
In view of this, the embodiments of the present application provide an image processing apparatus in which a view unit, a view model unit, a first renderer, a second renderer, and a filter drawing unit are arranged. The view unit provides the display interface fragments, the view model unit provides the view models corresponding to the display interface fragments, the custom control communicates with the first renderer, the first renderer sends the processed rendering data to the filter drawing unit, and the filter drawing unit calls the second renderer to color the rendering data and finally display it. This decouples the view from the view model, reduces the complexity of the code developers must handle, and improves image processing efficiency.
Fig. 2 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. As shown in fig. 2, the image processing apparatus includes:
a view unit 201, which includes an interface main class including N display interface segments, each display interface segment including a custom control.
Wherein N is a positive integer.
A view model unit 202, which includes a view model base class; the view model base class includes M view models that are bound to the N display interface fragments and are used to process the presentation logic and states of the display interface fragments, where M is a positive integer less than or equal to N.
A first renderer 203, which receives layout information of a display interface fragment through the custom control, acquires an image to be displayed, and renders the image to be displayed based on the layout information to obtain rendering data of the image to be displayed.
A filter drawing unit 204, which receives the rendering data and calls the second renderer to perform coloring processing on the rendering data to obtain the image to be displayed.
In the embodiments of the present application, the image processing apparatus may be a component of camera software. Unlike the MVC architecture used in conventional camera software, the image processing apparatus provided by the embodiments of the present application is implemented on a Model-View-ViewModel (MVVM) architecture. The MVVM architecture includes a model layer, a view layer, and a view model layer. The model layer encapsulates the business logic and data of the camera application, the view layer encapsulates the user interface and user interface logic, and the view model layer encapsulates the presentation logic and states. The view layer may bind to the view model layer, execute commands, and request corresponding actions from the view model layer. The view model layer may communicate with the model layer, informing it of updates in response to operations on the user interface. Since the image processing apparatus provided by the embodiments of the present application mainly processes images corresponding to the user interface, it is chiefly concerned with the view layer and the view model layer.
In the embodiments of the present application, the view layer may include a view unit. The view unit includes an interface main class MainActivity, which may include N display interface fragments (Fragments), each Fragment including the custom control MyGLSurfaceView. Specifically, MainActivity may include a home display interface fragment HomeFragment and at least one image display interface fragment PhotoFragment. HomeFragment may be the default interface of the camera application, and PhotoFragment may be an interface that the camera application needs to present to the user in response to a particular user operation.
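In one example, the skeleton of such a Fragment might look like the following Kotlin sketch; the layout resource fragment_home and the view id gl_surface_view are assumptions for illustration, not names taken from the application:

```kotlin
import android.os.Bundle
import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import androidx.fragment.app.Fragment

// One of the N display interface fragments held by MainActivity.
class HomeFragment : Fragment() {

    private lateinit var glView: MyGLSurfaceView

    override fun onCreateView(
        inflater: LayoutInflater,
        container: ViewGroup?,
        savedInstanceState: Bundle?
    ): View = inflater.inflate(R.layout.fragment_home, container, false)

    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        super.onViewCreated(view, savedInstanceState)
        // Each Fragment carries the custom control MyGLSurfaceView.
        glView = view.findViewById(R.id.gl_surface_view)
    }

    // Forward the lifecycle to the GL thread, as GLSurfaceView requires.
    override fun onResume() { super.onResume(); glView.onResume() }
    override fun onPause() { glView.onPause(); super.onPause() }
}
```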
In the embodiments of the present application, the view model layer may include a view model unit. The view model unit includes a view model base class BaseViewModel, which includes M FragmentViewModels bound to the N Fragments. In one example, BaseViewModel may include a HomeFragmentViewModel that binds to the HomeFragment in MainActivity. In yet another example, BaseViewModel may further include at least one PhotoFragmentViewModel, each PhotoFragmentViewModel being bound to one or more PhotoFragments in MainActivity. That is, one FragmentViewModel may bind one or more different Fragments.
In the embodiments of the present application, a FragmentViewModel may be used to handle the presentation logic and state of its Fragment. The presentation logic and state of the Fragment are the vision-related business logic and state data in the view. By processing the presentation logic and state of Fragments in FragmentViewModels, the cohesion of the image processing apparatus can be improved.
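A minimal Kotlin sketch of this binding, assuming Jetpack's ViewModel and LiveData as the underlying mechanism; the concrete state (activeFilter) is a hypothetical example, not taken from the application:

```kotlin
import androidx.lifecycle.LiveData
import androidx.lifecycle.MutableLiveData
import androidx.lifecycle.ViewModel

// Base class encapsulating methods common to all view models.
abstract class BaseViewModel : ViewModel()

// Handles the presentation logic and state of HomeFragment.
class HomeFragmentViewModel : BaseViewModel() {

    private val _activeFilter = MutableLiveData("none")
    val activeFilter: LiveData<String> = _activeFilter

    // Presentation logic: react to user input forwarded by the bound Fragment.
    fun onFilterSelected(name: String) {
        _activeFilter.value = name
    }
}
```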
In the embodiments of the present application, the first renderer MyRender can receive the Layout information of a Fragment through the custom control MyGLSurfaceView. Further, MyRender may obtain an image to be displayed and render it based on the obtained layout information to obtain the rendering data of the image to be displayed.
Specifically, MyRender may be integrated into MyGLSurfaceView to receive the Fragment's Layout information. MyRender may also obtain the images to be displayed through a specified API, including the home page image of the camera software and other images that need to be displayed on the user interface in response to user operations. MyRender perceives image changes and drawing through the designated API, converts the image stream acquired from the image pickup apparatus into frame-by-frame image data, i.e., texture, and then transmits the texture identifier and matrix data to the filter drawing unit FilterDraw.
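The following Kotlin sketch illustrates such a renderer under two assumptions: the camera stream arrives through a SurfaceTexture bound to an external OES texture, and FilterDraw exposes hypothetical prepare() and draw(textureId, matrix) methods (the stub below stands in for the real filter drawing unit):

```kotlin
import android.graphics.SurfaceTexture
import android.opengl.GLES11Ext
import android.opengl.GLES20
import android.opengl.GLSurfaceView
import javax.microedition.khronos.egl.EGLConfig
import javax.microedition.khronos.opengles.GL10

// Hypothetical stand-in for the filter drawing unit's interface.
class FilterDraw {
    fun prepare() { /* compile shaders, set up vertex buffers */ }
    fun draw(textureId: Int, texMatrix: FloatArray) { /* run the shader program */ }
}

class MyRender(private val filterDraw: FilterDraw) : GLSurfaceView.Renderer {

    var surfaceTexture: SurfaceTexture? = null
        private set
    private var textureId = 0
    private val texMatrix = FloatArray(16)

    override fun onSurfaceCreated(gl: GL10?, config: EGLConfig?) {
        // Create an external OES texture for the Camera2 stream to write into.
        val ids = IntArray(1)
        GLES20.glGenTextures(1, ids, 0)
        textureId = ids[0]
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textureId)
        surfaceTexture = SurfaceTexture(textureId)
        filterDraw.prepare()
    }

    override fun onSurfaceChanged(gl: GL10?, width: Int, height: Int) {
        // Apply the Fragment's layout size received through MyGLSurfaceView.
        GLES20.glViewport(0, 0, width, height)
    }

    override fun onDrawFrame(gl: GL10?) {
        // Pull the latest camera frame into the texture, then hand the texture
        // identifier and transform matrix to the filter drawing unit.
        surfaceTexture?.updateTexImage()
        surfaceTexture?.getTransformMatrix(texMatrix)
        filterDraw.draw(textureId, texMatrix)
    }
}
```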
In the embodiments of the present application, FilterDraw can receive the rendering data produced by MyRender and call the second renderer, OpenGL, to perform coloring processing on the rendering data to obtain the image to be displayed.
According to the technical solution provided by the embodiments of the present application, a view unit, a view model unit, a first renderer, a second renderer, and a filter drawing unit are arranged. The view unit provides the display interface fragments, the view model unit provides the view models corresponding to the display interface fragments, the custom control communicates with the first renderer, the first renderer sends the rendering data obtained by its processing to the filter drawing unit, and the filter drawing unit calls the second renderer to color the rendering data and finally display it. This decouples the view from the view model, reduces the complexity of the code developers must handle, and improves image processing efficiency.
Fig. 3 is a schematic structural diagram of a view unit according to an embodiment of the present application. As shown in fig. 3, the view unit includes an interface base class BaseActivity and an interface main class MainActivity, where MainActivity is a subclass of BaseActivity. BaseActivity encapsulates basic methods for displaying Fragments, and MainActivity encapsulates the specific methods for displaying each Fragment. In one example, some common methods may be encapsulated in BaseActivity, such as binding the ViewModel and wrapping the jump methods of the navigator tool library Navigation. MainActivity is responsible for implementing the methods left unimplemented in BaseActivity and for displaying specific interfaces.
Further, MainActivity may include the navigator tool library Navigation. Navigation is encapsulated in MainActivity, so that MainActivity can use the configuration file in Navigation to implement jumps between different Fragments. In one example, the configuration file in Navigation may be navi_graf.xml.
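A possible Kotlin sketch of this pair of classes; the NavHostFragment view id nav_host_fragment and the layout name activity_main are assumptions for illustration:

```kotlin
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import androidx.navigation.findNavController

// Base class encapsulating methods shared by all interfaces.
abstract class BaseActivity : AppCompatActivity() {
    // Common jump method wrapping the Navigation library.
    protected fun jumpTo(destinationId: Int, args: Bundle? = null) {
        findNavController(R.id.nav_host_fragment).navigate(destinationId, args)
    }
}

class MainActivity : BaseActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // activity_main hosts a NavHostFragment whose graph is navi_graf.xml.
        setContentView(R.layout.activity_main)
    }
}
```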
Furthermore, MainActivity may further include one or more display interface Fragments, each configured with the custom control MyGLSurfaceView. MyGLSurfaceView is an implementation class of GLSurfaceView, a control provided officially by Android. MyGLSurfaceView integrates the functions of OpenGL and SurfaceView, so that images can be rendered through OpenGL and displayed through SurfaceView.
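A minimal sketch of such a control, assuming OpenGL ES 2.0 and the MyRender/FilterDraw classes sketched above:

```kotlin
import android.content.Context
import android.opengl.GLSurfaceView
import android.util.AttributeSet

// Custom control combining OpenGL rendering with SurfaceView display.
class MyGLSurfaceView @JvmOverloads constructor(
    context: Context,
    attrs: AttributeSet? = null
) : GLSurfaceView(context, attrs) {

    init {
        setEGLContextClientVersion(2)         // OpenGL ES 2.0 shader pipeline
        setRenderer(MyRender(FilterDraw()))   // plug in the first renderer
        // Continuous redraw keeps the camera preview updating every frame.
        renderMode = GLSurfaceView.RENDERMODE_CONTINUOUSLY
    }
}
```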
By adopting the technical solution of this embodiment, encapsulating the common methods in BaseActivity and the specific methods in MainActivity within the view unit improves processing efficiency; meanwhile, configuring a MyGLSurfaceView control in each Fragment of MainActivity makes rendering and displaying images convenient and fast, further improving processing efficiency.
Fig. 4 is a schematic structural diagram of a view model unit according to an embodiment of the present application. As shown in fig. 4, the view model unit includes a view model base class BaseViewModel and view models (ViewModels). BaseViewModel is the base class of the ViewModels and encapsulates their common methods. A ViewModel is responsible for processing the code logic of a Fragment; time-consuming tasks such as code logic processing and thread-opening operations can be implemented in the ViewModel.
In the embodiments of the present application, the ViewModel fully decouples the code of the Fragment: time-consuming tasks such as code logic processing and thread-opening operations are placed in the ViewModel class, while the Fragment is responsible only for the interface. This division of labor and layering greatly decouples the Fragment, so that the amount of code inside the Fragment is reduced many times over.
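Continuing the HomeFragmentViewModel sketch above, a hypothetical time-consuming task moved off the interface thread might look like this; saveImage and its parameters are illustrative assumptions, not the application's actual code:

```kotlin
import androidx.lifecycle.viewModelScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
import kotlinx.coroutines.withContext
import java.io.File

// A slow task handled by the view model so the Fragment stays interface-only.
fun HomeFragmentViewModel.saveImage(bytes: ByteArray, path: String) {
    viewModelScope.launch {
        withContext(Dispatchers.IO) {        // thread-opening operation
            File(path).writeBytes(bytes)     // slow disk write off the main thread
        }
        // Back on the main thread: the Fragment only observes the resulting state.
    }
}
```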
In the embodiments of the present application, MyRender can acquire the image to be displayed in the following ways: in response to the image to be displayed being a fixed page image, MyRender acquires the pre-stored fixed page image as the image to be displayed; in response to the image to be displayed being an image captured by the image pickup apparatus in real time, MyRender acquires the data stream of the image pickup apparatus through a preset application program interface and converts the data stream into the image to be displayed.
That is, MyRender may first determine whether the image to be displayed is a fixed page image or an image captured by the camera in real time. For example, if the camera application has just been opened, or the user performs a "return to home page" operation on its interface, the image to be displayed is the home page interface image of the camera application, and MyRender may directly obtain the pre-stored fixed page image from memory or cache as the image to be displayed. If the user performs a photographing operation on the interface of the camera application, the image to be displayed is an image captured by the image pickup apparatus in real time; in this case, MyRender can acquire the data stream of the image pickup apparatus through a preset application program interface and convert the data stream into the image to be displayed. The preset application program interface may be the Camera2 application program interface.
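A Kotlin sketch of the two acquisition paths; the drawable resource home_page and the surrounding wiring are assumptions, and the Camera2 session setup is left as comments:

```kotlin
import android.content.Context
import android.graphics.BitmapFactory
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager

fun chooseImageSource(context: Context, isFixedPage: Boolean) {
    if (isFixedPage) {
        // Fixed page: load the pre-stored home page image from resources.
        val homePage = BitmapFactory.decodeResource(context.resources, R.drawable.home_page)
        // ... upload homePage as a 2D texture for MyRender to draw
    } else {
        // Live image: take the data stream through the Camera2 interface.
        val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
        val backCamera = manager.cameraIdList.first { id ->
            manager.getCameraCharacteristics(id)
                .get(CameraCharacteristics.LENS_FACING) == CameraCharacteristics.LENS_FACING_BACK
        }
        // manager.openCamera(backCamera, ...) would then create a capture
        // session targeting Surface(surfaceTexture) owned by MyRender.
    }
}
```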
In the embodiments of the present application, the second renderer OpenGL can be called to perform coloring processing on the rendering data in the following ways to obtain the image to be displayed: calling OpenGL to perform vertex shading and fragment shading on the rendering data to obtain the image to be displayed; or calling OpenGL to perform vertex shading and fragment shading on the rendering data and adding special-effect processing during fragment shading to obtain the image to be displayed.
That is, FilterDraw may invoke OpenGL to perform vertex shading and fragment shading on the rendering data to obtain the image to be displayed. Furthermore, when OpenGL is called to perform fragment shading on the rendering data, special-effect processing can be added, such as applying a filter or a large-eye effect, to obtain the image to be displayed. The vertex shader may be camera_vertex.glsl and the fragment shader may be camera_fragment.glsl, where camera_vertex.glsl is responsible for determining the displayed position coordinates, and camera_fragment.glsl is responsible for rendering to those coordinates on the interface.
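The following sketch shows what shader sources in the spirit of camera_vertex.glsl and camera_fragment.glsl could look like, embedded as Kotlin string constants; the grayscale filter branch stands in for the special-effect processing and is an illustrative assumption:

```kotlin
// Vertex stage: determines the displayed position coordinates.
const val VERTEX_SHADER = """
    attribute vec4 aPosition;
    attribute vec4 aTexCoord;
    uniform mat4 uTexMatrix;
    varying vec2 vTexCoord;
    void main() {
        gl_Position = aPosition;
        vTexCoord = (uTexMatrix * aTexCoord).xy;  // apply camera transform matrix
    }
"""

// Fragment stage: samples the camera texture; special effects are added here.
const val FRAGMENT_SHADER = """
    #extension GL_OES_EGL_image_external : require
    precision mediump float;
    varying vec2 vTexCoord;
    uniform samplerExternalOES uTexture;
    uniform int uFilter;                          // 0 = none, 1 = grayscale
    void main() {
        vec4 color = texture2D(uTexture, vTexCoord);
        if (uFilter == 1) {
            float g = dot(color.rgb, vec3(0.299, 0.587, 0.114));
            color = vec4(vec3(g), color.a);       // filter applied at fragment stage
        }
        gl_FragColor = color;
    }
"""
```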
As described above, the image processing apparatus provided by the embodiments of the present application is implemented on the MVVM architecture, and may therefore further include a model unit. The Layout information of a Fragment may be determined as follows: the view unit receives user input and sends it to the view model unit; the view model unit forwards the user input to the model unit and receives the data and commands to be bound from the model unit; the view model unit determines a view model based on the data and commands to be bound; and the view unit determines the corresponding display interface fragment based on the view model, and thus determines the layout information of that fragment.
That is, the view unit may receive user input on the camera application interface and send the user input to the view model unit. The view model unit sends the user input to the model unit. After receiving the user input, the model unit determines, in response to the operation input by the user, the data and commands that need to be bound, and sends them to the view model unit. The view model unit determines the view model based on the received data and commands. Because ViewModels and Fragments are bound, the view unit can determine the corresponding display Fragment based on the ViewModel, and thus determine the Fragment's Layout information.
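A compact Kotlin sketch of this flow, reusing the BaseViewModel sketch above; CameraModel and the command strings are hypothetical stand-ins for the model unit, not names from the application:

```kotlin
import androidx.lifecycle.LiveData
import androidx.lifecycle.MutableLiveData

// Hypothetical model unit: maps a user input to the data/command to bind.
class CameraModel {
    fun resolve(input: String): String = when (input) {
        "take_photo" -> "navigate_to_photo_fragment"
        else -> "navigate_to_home_fragment"
    }
}

// Forwards input to the model and exposes the returned command; the bound
// view observes it and selects the matching Fragment.
class NavigationViewModel(private val model: CameraModel = CameraModel()) : BaseViewModel() {

    private val _command = MutableLiveData<String>()
    val command: LiveData<String> = _command

    fun onUserInput(input: String) {
        _command.value = model.resolve(input)   // data/command to be bound
    }
}
```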
In the embodiments of the present application, the first renderer and the second renderer both use the open graphics library OpenGL to implement image rendering and shading.
Fig. 5 is a schematic view of a software framework of a camera including an image processing apparatus according to an embodiment of the present application. As shown in fig. 5, the camera is implemented based on the MVVM architecture, which is divided into a model layer (M layer), a view layer (V layer), and a view model layer (VM layer). The M layer encapsulates the application's business logic and data; the V layer encapsulates the user interface and user interface logic; the VM layer encapsulates the presentation logic and states.
The View may be bound to the ViewModel to execute commands and request corresponding actions from it. In turn, the ViewModel may communicate with the Model, telling it to update in response to the user interface. This makes it very easy to build a user interface for an application: the easier it is to attach an interface to an application, the easier it is for a designer to create a beautiful one. Meanwhile, as the user interface and the functionality become more loosely coupled, the functionality becomes more testable.
Further, the software architecture of the camera is developed and implemented in the Kotlin language. Conventional Android development uses the Java language. Java was released more than 20 years ago, and although it has kept being updated, it must remain compatible with earlier Java versions, which makes it inflexible to use and imposes some limitations. Kotlin is the language officially recommended for Android; it leverages the language design expertise accumulated in the 20-plus years since Java's birth and provides the modern features that Java mobile developers have long wanted, features that have proven their effectiveness in large projects. Developing in Kotlin has the following advantages: the tooling is efficient and familiar and aims to improve developer productivity; Kotlin has a good compiler; it integrates seamlessly with existing infrastructure, since Kotlin is compatible with all Java frameworks and libraries and can be easily integrated with the Maven and Gradle build systems; and it can deliver enhanced runtime performance.
Furthermore, Navigation is adopted in the software architecture of the camera to manage Fragments. Navigation is included in Google's Android Jetpack toolkit and is a new framework pattern for managing Fragments. With Navigation, the scenario in which one Activity manages multiple Fragment display pages is easily realized, which is more convenient than the traditional way of managing Fragments with FragmentManager.
Meanwhile, Camera2 is used in the software architecture of the camera to capture the stream. The Camera1 API provided by Google is not used because the methods Camera1 provides are limited, and when multiple cameras stream simultaneously, Camera1 cannot handle the multi-stream problem well. The more recent Camera2 provided by Google handles streaming with a pipeline concept and can flexibly deal with multi-camera issues. In addition, the software architecture of the camera uses OpenGL, a tool library for 2D and 3D vector graphics rendering: a custom renderer is created through the GLSurfaceView control; in the renderer, frame data of the audio/video stream are converted into textures and fed to OpenGL, which finally realizes specific display effects such as large eyes, filters, special effects, and beautification.
Fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. As shown in fig. 6, the image processing apparatus includes a part comprising the interface base class BaseActivity, a part comprising the view model base class BaseViewModel, a part comprising the renderer MyRender, and a part comprising the rendering display OpenGL.
The interface base class BaseActivity is the base class of the interfaces and encapsulates some common methods, such as binding the ViewModel and the jump methods of the navigator tool library Navigation. The interface main class MainActivity is a subclass of BaseActivity; it is responsible for implementing the methods left unimplemented in BaseActivity and for displaying specific interfaces.
The interface main class MainActivity integrates the navigator tool library Navigation. Navigation manages the jumps between the display interface Fragments and establishes the jumps among multiple Fragments through the navi_graf.xml configuration file.
The home page display interface fragment HomeFragment and the photo display interface fragment PhotoFragment are interface Fragments in MainActivity and are responsible for displaying the specific business interfaces.
The view model base class BaseViewModel is the base class of the ViewModels and encapsulates their common methods. A ViewModel is responsible for processing the code logic of a Fragment; time-consuming tasks such as code logic processing and thread-opening operations are implemented in this class.
Using HomeFragmentViewModel, the code of HomeFragment can be fully decoupled: time-consuming tasks such as code logic processing or thread-opening operations are implemented in this class, while HomeFragment is responsible only for the interface. This division of labor and layering greatly decouples HomeFragment, so that the amount of code in HomeFragment is reduced many times over.
The custom control MyGLSurfaceView exists in HomeFragment and PhotoFragment. MyGLSurfaceView is an implementation class of GLSurfaceView, a control provided officially by Android; it integrates the functions of OpenGL and SurfaceView, so it can render images through OpenGL and display them through SurfaceView.
The first renderer MyRender is integrated in MyGLSurfaceView. MyRender is responsible for rendering images: it perceives image changes and drawing through the designated interface, performs the Camera2 streaming operation, converts the image stream frame by frame into image data, i.e., texture, and then passes the texture identifier and matrix data into FilterDraw.
MyGLSurfaceView is closely related to FilterDraw; the FilterDraw class may be dedicated to integrating OpenGL and preparing the parameters OpenGL needs.
OpenGL is the graphics language executed on the graphics card; whatever effect is finally achieved is achieved through OpenGL. Here OpenGL is divided into a vertex shader and a fragment shader. The vertex shader may be camera_vertex.glsl and the fragment shader may be camera_fragment.glsl. camera_vertex.glsl is responsible for determining the displayed position coordinates; camera_fragment.glsl is responsible for rendering to the screen, and typically both the large-eye effect and filters are implemented in the fragment shader.
Any combination of the above optional solutions may be adopted to form an optional embodiment of the present application, which is not described herein.
The following are embodiments of the method of the present application, which may be performed by embodiments of the apparatus of the present application. For details not disclosed in the method embodiments of the present application, please refer to the device embodiments of the present application.
Fig. 7 is a flowchart of an improved image processing method according to an embodiment of the present application. The method shown in fig. 7 may be performed by the image processing apparatus shown in fig. 2. As shown in fig. 7, the image processing method includes the steps of:
In step S701, user input is acquired from the view unit.
In step S702, a view model is determined based on user input, and a display interface segment corresponding to the view model is determined.
In step S703, layout information of the display interface segment is sent to the first renderer through the custom control.
In step S704, the first renderer acquires an image to be displayed.
In step S705, the first renderer renders the image to be displayed based on the layout information, and obtains rendering data of the image to be displayed.
In step S706, the filter drawing unit receives the rendering data, and invokes the second renderer to perform coloring processing on the rendering data, thereby obtaining an image to be displayed.
According to the technical solution provided by the embodiments of the present application, a view unit, a view model unit, a first renderer, a second renderer, and a filter drawing unit are arranged. The view unit provides the display interface fragments, the view model unit provides the view models corresponding to the display interface fragments, the custom control communicates with the first renderer, the first renderer sends the rendering data obtained by its processing to the filter drawing unit, and the filter drawing unit calls the second renderer to color the rendering data and finally display it. This decouples the view from the view model, reduces the complexity of the code developers must handle, and improves image processing efficiency.
In an embodiment of the present application, the first renderer obtaining an image to be displayed may include: in response to the image to be displayed being a fixed page image, the first renderer acquiring the pre-stored fixed page image as the image to be displayed; and in response to the image to be displayed being an image captured by the image pickup apparatus in real time, the first renderer acquiring the data stream of the image pickup apparatus through a preset application program interface and converting the data stream into the image to be displayed.
In an embodiment of the present application, calling the second renderer to perform coloring processing on the rendering data to obtain the image to be displayed may include: calling the second renderer to perform vertex shading and fragment shading on the rendering data to obtain the image to be displayed; or calling the second renderer to perform vertex shading and fragment shading on the rendering data and adding special-effect processing during fragment shading to obtain the image to be displayed.
In an embodiment of the present application, the layout information of the display interface fragment can be determined as follows: the view unit receives user input and sends it to the view model unit; the view model unit sends the user input to the model unit and receives the data and commands to be bound from the model unit; the view model unit determines a view model based on the data and commands to be bound; and the view unit determines the corresponding display interface fragment based on the view model, and thus determines its layout information.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
Fig. 8 is a schematic diagram of an image capturing apparatus according to an embodiment of the present application. As shown in fig. 8, the image pickup apparatus may include an image processing apparatus provided in the embodiment shown in fig. 2 or fig. 6.
Fig. 9 is a schematic diagram of an electronic device according to an embodiment of the present application. As shown in fig. 9, the electronic apparatus 9 of this embodiment includes: a processor 901, a memory 902 and a computer program 903 stored in the memory 902 and executable on the processor 901. The steps of the various method embodiments described above are implemented when the processor 901 executes the computer program 903. Alternatively, the processor 901 performs the functions of the modules/units in the above-described apparatus embodiments when executing the computer program 903.
The electronic device 9 may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or the like. The electronic device 9 may include, but is not limited to, the processor 901 and the memory 902. It will be appreciated by those skilled in the art that fig. 9 is merely an example of the electronic device 9 and does not limit the electronic device 9, which may include more or fewer components than shown, or different components.
The processor 901 may be a central processing unit (Central Processing Unit, CPU) or other general purpose processor, digital signal processor (Digital Signal Processor, DSP), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), field programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like.
The memory 902 may be an internal storage unit of the electronic device, for example, a hard disk or a memory of the electronic device 9. The memory 902 may also be an external storage device of the electronic device 9, for example, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card) or the like, which are provided on the electronic device 9. The memory 902 may also include both internal and external memory units of the electronic device 9. The memory 902 is used to store computer programs and other programs and data required by the electronic device.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the method of the above embodiment, or may be implemented by a computer program to instruct related hardware, and the computer program may be stored in a computer readable storage medium, where the computer program, when executed by a processor, may implement the steps of each of the method embodiments described above. The computer program may comprise computer program code, which may be in source code form, object code form, executable file or in some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (10)

1. An image processing apparatus, characterized in that the image processing apparatus comprises:
the device comprises a view unit, a view model unit, a first renderer, a second renderer and a filter drawing unit;
the view unit comprises an interface main class, wherein the interface main class comprises N display interface fragments, and each display interface fragment comprises a custom control, wherein N is a positive integer;
the view model unit comprises a view model base class, the view model base class comprises M view models, the M view models are bound with the N display interface fragments, and the view models are used for processing the representation logic and the state of the display interface fragments, wherein M is a positive integer less than or equal to N;
the first renderer receives layout information of the display interface segment through the custom control;
the first renderer acquires an image to be displayed;
the first renderer renders the image to be displayed based on the layout information, and rendering data of the image to be displayed is obtained;
and the filter drawing unit receives the rendering data and calls the second renderer to perform coloring processing on the rendering data to obtain the image to be displayed.
2. The apparatus of claim 1, wherein the view unit further comprises:
an interface base class;
the interface main class is a subclass of the interface base class, a basic method for displaying the display interface fragments is encapsulated in the interface base class, and a specific method for displaying each of the display interface fragments is encapsulated in the interface main class.
3. The apparatus of claim 1, wherein the view unit further comprises:
a navigator tool library;
the navigator tool library is packaged in the interface main class, and the interface main class uses configuration files in the navigator tool library to realize the jump among different display interface fragments.
4. The apparatus of claim 1, wherein the first renderer acquires an image to be displayed, comprising:
in response to the image to be displayed being a fixed page image, the first renderer acquires the pre-stored fixed page image as the image to be displayed;
and in response to the image to be displayed being an image captured by an image pickup apparatus in real time, the first renderer acquires the data stream of the image pickup apparatus through a preset application program interface and converts the data stream into the image to be displayed.
5. The apparatus of claim 1, wherein the invoking the second renderer to render the rendering data to obtain the image to be displayed comprises:
invoking the second renderer to perform vertex coloring and fragment coloring on the rendering data to obtain the image to be displayed; or alternatively
And calling the second renderer to perform vertex coloring processing and fragment coloring processing on the rendering data, and adding special effect processing when the fragment coloring processing is performed, so as to obtain the image to be displayed.
6. The apparatus of claim 2, further comprising a model unit configured to determine layout information of the display interface segment by:
the view unit receives user input and sends the user input to the view model unit;
the view model unit sends the user input to the model unit and receives data and commands to be bound from the model unit;
the view model unit determines a view model based on the data and the command to be bound;
and the view unit determines a corresponding display interface segment based on the view model, and further determines layout information of the display interface segment.
7. The apparatus of any one of claims 1 to 6, wherein the first and second renderers implement image rendering and shading using an open graphics library OpenGL.
8. An image processing method, the method comprising:
acquiring user input from a view unit;
determining a view model based on the user input, and further determining a display interface segment corresponding to the view model;
the layout information of the display interface segment is sent to a first renderer through a custom control;
the first renderer acquires an image to be displayed;
the first renderer renders the image to be displayed based on the layout information, and rendering data of the image to be displayed is obtained;
and the filter drawing unit receives the rendering data and calls a second renderer to color the rendering data to obtain the image to be displayed.
9. An image pickup apparatus, characterized in that the image pickup apparatus includes the image processing apparatus according to any one of claims 1 to 7.
10. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method of claim 8 when the computer program is executed by the processor.
CN202310938170.3A 2023-07-27 2023-07-27 Image processing apparatus, image processing method, image capturing apparatus, and electronic apparatus Pending CN116958300A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310938170.3A CN116958300A (en) 2023-07-27 2023-07-27 Image processing apparatus, image processing method, image capturing apparatus, and electronic apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310938170.3A CN116958300A (en) 2023-07-27 2023-07-27 Image processing apparatus, image processing method, image capturing apparatus, and electronic apparatus

Publications (1)

Publication Number Publication Date
CN116958300A true CN116958300A (en) 2023-10-27

Family

ID=88451012

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310938170.3A Pending CN116958300A (en) 2023-07-27 2023-07-27 Image processing apparatus, image processing method, image capturing apparatus, and electronic apparatus

Country Status (1)

Country Link
CN (1) CN116958300A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240116

Address after: No. 13 Xingxiang Road, Zengjia Town, High tech Zone, Shapingba District, Chongqing, 400039

Applicant after: Chongqing Selis Phoenix Intelligent Innovation Technology Co.,Ltd.

Address before: 401120 No. 618 Liangjiang Avenue, Longxing Town, Yubei District, Chongqing City

Applicant before: Chongqing Celes New Energy Automobile Design Institute Co.,Ltd.