CN108431867B - Data processing method and terminal - Google Patents


Info

Publication number
CN108431867B
Authority
CN
China
Legal status (assumed by Google; not a legal conclusion)
Active
Application number
CN201780005199.9A
Other languages
Chinese (zh)
Other versions
CN108431867A (en)
Inventor
韩军辉 (Han Junhui)
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Publication of CN108431867A
Application granted
Publication of CN108431867B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing


Abstract

An embodiment of the invention provides a data processing method and device in the field of image processing, addressing the prior-art problems of poor image quality and poor display effect during real-time photographing. The method comprises the following steps: acquiring a data stream of an image for photographing, where the data stream comprises at least two photographing data frames and at least two control data frames; extracting an image feature of each of the at least two photographing data frames; selecting a target photographing data frame from the at least two photographing data frames according to the image feature of each photographing data frame; and processing the at least two control data frames and the target photographing data frame to generate a display image.

Description

Data processing method and terminal
The present application claims priority from the Chinese patent application entitled "A Method and Apparatus for Photographic Processing", filed with the Chinese Patent Office on November 30, 2016 under application number 201611082252.9, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of image processing, and in particular, to a data processing method and a terminal.
Background
Currently, as shown in fig. 1, when a user takes a real-time photo with a terminal such as a mobile phone or a camera, a camera sensor in the terminal captures the target and converts it into RAW data, which can be cached in a data buffer and converted into a preview data stream, a photographing data stream, and control information. The RAW data is the most original captured data stream of the image; the preview data stream consists of the continuous YUV data frames produced by the Image Signal Processing (ISP) unit; the photographing data stream is the data stream used to generate the display image and may include a plurality of photographing data frames; the control information includes an identification code or a timestamp for each photographing data frame and may comprise a plurality of control data frames, each of which may carry certain control information. The photographing data stream is then transmitted to the ISP unit responsible for image processing. The ISP unit compares and matches the control information against the photographing data stream, and encodes the first successfully matched photographing data frame into a JPEG or other compressed-format picture, i.e. the final display image, which is displayed on a display panel, for example the JPEG image shown in fig. 1.
In this method, when the ISP unit compares and matches the control information with the photographing data stream, matching is performed according to the timestamp or identification code corresponding to each photographing data frame, and the final display image is generated from the first successfully matched photographing data frame. However, when the RAW data is acquired, factors such as jitter or environmental change introduce differences or noise into the data, so a JPEG image generated by the matching procedure above may suffer from poor image quality and a poor display effect.
Disclosure of Invention
The embodiment of the invention provides a data processing method and a terminal, which solve the prior-art problems of poor image quality and poor display effect during real-time photographing.
In order to achieve the above purpose, the embodiment of the invention adopts the following technical scheme:
In a first aspect, a data processing method is provided, which includes: acquiring a data stream of an image for photographing, where the data stream comprises at least two photographing data frames and at least two control data frames; extracting an image feature of each of the at least two photographing data frames; selecting a target photographing data frame from the at least two photographing data frames according to the image feature of each photographing data frame; and processing the at least two control data frames and the target photographing data frame to generate a display image. In the embodiment of the invention, after the terminal acquires the data stream of the image for photographing, it can extract an image feature of each of the at least two photographing data frames included in the data stream, select a target photographing data frame from them based on those image features, and then process the at least two control data frames together with the target photographing data frame to generate the display image. This ensures better quality of the display image during real-time photographing, improves its display effect, and improves the user experience.
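As a minimal sketch (all function names, the feature metric, and the threshold are illustrative assumptions, not taken from the patent), the four steps of the first aspect might look like:

```python
# Hypothetical sketch of the first-aspect pipeline: acquire -> extract -> select -> process.

def extract_feature(frame):
    # Stand-in "image feature": the mean pixel value of the frame.
    return sum(frame) / len(frame)

def select_target(frames, threshold):
    # Keep the frames whose feature meets the preset condition, then pick the best.
    candidates = [f for f in frames if extract_feature(f) >= threshold]
    return max(candidates, key=extract_feature) if candidates else None

def process(control_frames, target):
    # Placeholder for matching a control data frame and encoding the display image.
    return {"image": target, "control": control_frames[0]}

photo_frames = [[10, 20, 30], [200, 210, 220], [90, 95, 100]]  # "at least two" frames
target = select_target(photo_frames, threshold=50)
display = process([{"ts": 1}], target)
```

Here the brightest frame wins only because of the toy metric; the patent leaves the concrete feature and condition open.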
With reference to the first aspect, in a first possible implementation manner of the first aspect, the image feature includes at least one of the following: a color feature, a texture feature, a shape feature, or a spatial relationship feature. This implementation provides several possible image features of the photographing data frames, so that when the target photographing data frame is selected according to the extracted image features, the frame with the better effect can be chosen from the at least two photographing data frames to generate the display image.
With reference to the first aspect or the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect, selecting a target photographing data frame from the at least two photographing data frames according to the image feature of each photographing data frame includes: judging whether the image feature of each photographing data frame meets a preset condition, and selecting a photographing data frame meeting the preset condition from the at least two photographing data frames as the target photographing data frame. If the image feature comprises one feature, the preset condition includes that the feature parameter of that feature is greater than or equal to a first preset threshold; or, if the image feature comprises one feature, the preset condition includes that the feature parameter of that feature is less than or equal to the first preset threshold; or, if the image feature comprises at least two features, the preset condition includes that the sum of the products of the feature parameters of the at least two features and their corresponding preset weights is greater than or equal to a second preset threshold; or, if the image feature comprises at least two features, the preset condition includes that this sum is less than or equal to the second preset threshold. These possible preset conditions allow the terminal to select the target photographing data frame with the better effect from the at least two photographing data frames, improving the quality of the generated display image and hence its display effect.
With reference to the first aspect or the first possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect, the data stream of the image for photographing further includes at least two pieces of sensor information, each piece of sensor information corresponds to one photographing data frame, and a target photographing data frame is selected from the at least two photographing data frames according to an image feature of each photographing data frame, including: and selecting a target photographing data frame from at least two photographing data frames according to the image characteristics of each photographing data frame and the sensor information corresponding to each photographing data frame. In the possible implementation manner, the target photographing data frame is selected by combining the image characteristics of each photographing data frame and the sensor information, so that the problem of poor display effect of the generated display image caused by problems such as jitter or angle can be avoided, and the user experience is further improved.
In a second aspect, a terminal is provided, which includes: an acquisition unit configured to acquire a data stream of an image for photographing, the data stream of the image for photographing including at least two photographing data frames and at least two control data frames; the extraction unit is used for extracting the image characteristics of each photographing data frame in at least two photographing data frames; the selection unit is used for selecting a target photographing data frame from at least two photographing data frames according to the image characteristics of each photographing data frame; and the processing unit is used for processing the at least two control data frames and the target photographing data frame to generate a display image.
With reference to the second aspect, in a first possible implementation manner of the second aspect, the image feature includes at least one of the following features: color features, texture features, shape features, spatial relationship features.
With reference to the second aspect or the first possible implementation manner of the second aspect, in a second possible implementation manner of the second aspect, the selecting unit is specifically configured to: judge whether the image feature of each photographing data frame meets a preset condition, and select a photographing data frame meeting the preset condition from the at least two photographing data frames as the target photographing data frame; if the image feature comprises one feature, the preset condition includes that the feature parameter of the feature is greater than or equal to a first preset threshold; or, if the image feature comprises one feature, the preset condition includes that the feature parameter of the feature is less than or equal to the first preset threshold; or, if the image feature comprises at least two features, the preset condition includes that the sum of the products of the feature parameters of the at least two features and the corresponding preset weights is greater than or equal to a second preset threshold; or, if the image feature comprises at least two features, the preset condition includes that the sum of the products of the feature parameters of the at least two features and the corresponding preset weights is less than or equal to the second preset threshold.
With reference to the second aspect or the first possible implementation manner of the second aspect, in a third possible implementation manner of the second aspect, the data stream of the image for photographing further includes at least two pieces of sensor information, each piece of sensor information corresponds to one photographing data frame, and the selecting unit is specifically configured to: and selecting a target photographing data frame from at least two photographing data frames according to the image characteristics of each photographing data frame and the sensor information corresponding to each photographing data frame.
In a third aspect, a terminal is provided, where the terminal includes a processor and a memory, the memory stores code and data, and the processor runs the code in the memory so that the terminal executes the data processing method provided in any one of the first aspect to the third possible implementation manner of the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, where computer-executable instructions are stored in the computer-readable storage medium, and when at least one processor of a device executes the computer-executable instructions, the device executes the data processing method provided in any one of the first aspect to the third possible implementation manner of the first aspect.
In a fifth aspect, a computer program product is provided, the computer program product comprising computer-executable instructions stored in a computer-readable storage medium; at least one processor of a device may read the computer-executable instructions from the computer-readable storage medium, and execution of those instructions by the at least one processor causes the device to implement the data processing method provided in any one of the first aspect to the third possible implementation manner of the first aspect.
It is understood that the terminal, the computer storage medium, or the computer program product of any of the data processing methods provided above are all configured to execute the corresponding methods provided above, and therefore, the beneficial effects achieved by the terminal, the computer storage medium, or the computer program product of any of the data processing methods provided above can refer to the beneficial effects in the corresponding methods provided above, and are not described herein again.
Drawings
FIG. 1 is a schematic diagram of a data stream processing method for photographed images;
FIG. 2 is a structural diagram of a terminal according to an embodiment of the present invention;
FIG. 3 is a flowchart of a data processing method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a data stream processing method for a photographed image according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a terminal according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of another terminal according to an embodiment of the present invention.
Detailed Description
The data processing method provided by the embodiment of the invention may be executed by a terminal, where the terminal may be a mobile phone, a tablet computer, a notebook computer, an Ultra-Mobile Personal Computer (UMPC), a netbook, a Personal Digital Assistant (PDA), a video camera, a camera, and the like. The embodiment of the present invention is described by taking a mobile phone as an example of the terminal, and fig. 2 is a block diagram illustrating part of the structure of a mobile phone related to each embodiment of the present invention.
It will be understood by those skilled in the art that the structure shown in fig. 2 is only an illustration and is not intended to limit the structure of the terminal. For example, the terminal may also include more or fewer components than shown in FIG. 2, or have a different configuration than shown in FIG. 2.
As shown in fig. 2, the terminal 20 includes: memory 201, processor 202, sensor component 203, multimedia component 204, power component 205, input/output interface 206.
The various components of the terminal 20 will now be described in detail with reference to fig. 2:
Memory 201 may be used to store data, software programs, and modules; it mainly comprises a program storage area and a data storage area, where the program storage area may store an operating system and application programs required by at least one function (such as a sound playing function and an image playing function), and the data storage area may store data created according to the use of the terminal 20 (such as audio data, image data, and a phonebook). Further, the memory 201 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 202 is a control center of the terminal 20, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the terminal 20 and processes data by running or executing software programs and/or modules stored in the memory 201 and calling data stored in the memory 201, thereby performing overall monitoring of the terminal 20. Alternatively, processor 202 may include one or more processing units; preferably, the processor 202 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 202.
The sensor component 203 includes one or more sensors for providing various aspects of status assessment for the terminal 20. The sensor assembly 203 may include, among other things, a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. Further, the sensor assembly 203 may further include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor or a temperature sensor, and acceleration/deceleration, orientation, on/off state of the terminal 20, relative positioning of the components, or temperature change of the terminal 20, etc. may be detected by the sensor assembly 203.
The multimedia component 204 provides a screen serving as an output interface between the terminal 20 and the user, for example a liquid crystal display or a touch panel. When the screen is a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. A touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In addition, the multimedia component 204 may include a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the terminal 20 is in an operation mode, such as a photographing mode or a video mode. Each front and rear camera may use a fixed optical lens system or have focus and optical zoom capability.
The power components 205 are used to provide power to the various components of the terminal 20, and the power components 205 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the terminal 20. Input/output interface 206 provides an interface between processor 202 and peripheral interface modules, such as a keyboard, mouse, etc.
Although not shown, the terminal 20 may further include an audio component, a communication component, and the like, for example, the audio component includes a microphone, and the communication component includes a WiFi (wireless fidelity) module, a bluetooth module, and the like, which are not described herein again.
Fig. 3 is a flowchart of a data processing method according to an embodiment of the present invention, which is applied to the terminal shown in fig. 2, and referring to fig. 3, the method includes the following steps.
Step 301: a data stream of an image for taking a picture is acquired, the data stream of the image for taking a picture including at least two data frames for taking a picture and at least two control data frames.
When the terminal takes a picture, it may acquire a data stream of an image for photographing; this data stream is RAW data and may include a preview data stream, a photographing data stream, and control information. The RAW data is the most original image data about the image for photographing captured by the camera sensor of the terminal, and it can be cached in the data buffer. The preview data stream consists of the continuous YUV data frames obtained by processing the RAW data. A photographing data frame carries the information used to generate a display image, and the control information includes an identification code or a timestamp for each photographing data frame. In the embodiment of the present invention, the photographing data stream includes at least two photographing data frames, that is, it may include two or more photographing data frames; the control information includes at least two control data frames, that is, it may include two or more control data frames.
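A rough data model of this stream (the field names and types are assumptions for illustration; the patent does not specify a concrete layout) could be:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PhotoFrame:
    frame_id: int        # identification code referenced by the control information
    timestamp: float     # capture time, also usable for matching
    pixels: List[int]    # frame payload, simplified to a flat list here

@dataclass
class ControlFrame:
    frame_id: int        # identifies the photographing data frame it describes
    timestamp: float
    info: Dict[str, str] = field(default_factory=dict)

@dataclass
class PhotoDataStream:
    photo_frames: List[PhotoFrame]      # at least two photographing data frames
    control_frames: List[ControlFrame]  # at least two control data frames

stream = PhotoDataStream(
    photo_frames=[PhotoFrame(1, 0.00, [1, 2]), PhotoFrame(2, 0.03, [3, 4])],
    control_frames=[ControlFrame(1, 0.00), ControlFrame(2, 0.03)],
)
```

The shared `frame_id`/`timestamp` pair is what later allows a control data frame to be matched to its photographing data frame.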
Step 302: an image feature of each of the at least two photographed data frames is extracted.
After the terminal acquires the data stream of the image for photographing, the terminal may extract an image feature of each of the at least two photographing data frames, and the terminal may extract one image feature for each photographing data frame or extract a plurality of image features. The image features may include at least one of: color features, texture features, shape features, spatial relationship features.
The color feature is a global feature that describes the surface properties of the scene corresponding to an image or an image region. Color features are generally based on the characteristics of individual pixel points, and a color histogram, a color set, color moments, a color coherence vector, and a color correlogram are the most commonly used ways to express them.
Texture features, also a global feature, also describe the surface properties of the scene corresponding to an image or image area. Unlike color features, texture features are not based on the characteristics of the pixel points, which requires statistical calculations in regions containing multiple pixel points. Optionally, the commonly used texture feature extraction method may include a statistical method, a geometric method, a model method, a signal processing method, and the like.
Shape features generally have two types of representation: outline features and region features. Outline features are primarily directed at the outer boundary of the object, while region features relate to the entire shape region. Commonly used shape feature description methods include the boundary feature method, the Fourier shape descriptor method, the geometric parameter method, the shape invariant moment method, and the like.
The spatial relationship feature refers to the mutual spatial position or relative direction relationship among multiple targets segmented from the image; these relationships can be divided into connection/adjacency relationships, overlap/occlusion relationships, inclusion/containment relationships, and the like. Optionally, there are two common methods for extracting image spatial relationship features: one is to automatically segment the photographing data frame into the objects or color regions it contains, then extract image features for each region and build an index; the other is to divide the photographing data frame uniformly into several regular sub-blocks, then extract features for each sub-block and build an index.
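For instance, the color-histogram feature mentioned above can be sketched in pure Python (a real implementation would use an image-processing library; the bin count here is an arbitrary choice):

```python
def color_histogram(pixels, bins=4, max_value=256):
    # Count how many pixel values fall into each of `bins` equal-width buckets.
    hist = [0] * bins
    width = max_value / bins
    for p in pixels:
        hist[min(int(p / width), bins - 1)] += 1
    return hist

# Two tiny single-channel "frames": a dark one and a bright one.
dark = [10, 20, 30, 40]
bright = [200, 210, 220, 250]
print(color_histogram(dark))    # → [4, 0, 0, 0]
print(color_histogram(bright))  # → [0, 0, 0, 4]
```

Comparing such histograms is one simple way a frame-selection step could score the "color feature" of each photographing data frame.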
Step 303: and selecting a target photographing data frame from the at least two photographing data frames according to the image characteristics of each photographing data frame.
The terminal may select a target photographing data frame from the at least two photographing data frames according to the image feature of each photographing data frame, where the selecting the target photographing data frame may include: the terminal judges whether the image characteristics of each photographing data frame meet preset conditions or not, and selects the photographing data frame meeting the preset conditions from at least two photographing data frames as a target photographing data frame. Wherein, if the image feature includes a feature, the preset condition includes: the characteristic parameter of the characteristic is greater than or equal to a first preset threshold value; or, if the image feature includes a feature, the preset condition includes: the characteristic parameter of the characteristic is less than or equal to a first preset threshold value; or, if the image characteristic includes at least two features, the preset condition includes: the sum of the products of the characteristic parameters of the at least two characteristics and the corresponding preset weights is greater than or equal to a second preset threshold value; or, if the image characteristic includes at least two features, the preset condition includes: the sum of the products of the characteristic parameters of the at least two characteristics and the corresponding preset weights is less than or equal to a second preset threshold value.
For example, suppose the image feature extracted by the terminal from the photographing data frames includes one feature, which may be any one of a color feature, a texture feature, a shape feature, or a spatial relationship feature; the feature parameter corresponding to the image feature of a certain photographing data frame is a, and the first preset threshold is A. If a larger feature parameter indicates a better frame, the preset condition is that the feature parameter is greater than or equal to the first preset threshold: when a ≥ A, it may be determined that this photographing data frame satisfies the preset condition and can be taken as the target photographing data frame.
Specifically, if two or more of the at least two photographing data frames meet the preset condition according to the above method, the photographing data frame with the optimal feature parameter among them can be taken as the target photographing data frame; that is, the photographing data frame with the best effect is selected as the target.
For another example, the image features extracted by the terminal from the photographing data frame include at least two features, selected from among color features, texture features, shape features, and spatial relationship features. Denote the feature parameters of the at least two features for a given photographing data frame as a1, a2, …, an, and their corresponding preset weights as b1, b2, …, bn, where n is an integer greater than or equal to 2 and each of b1, b2, …, bn is a number between 0 and 1 inclusive; let the second preset threshold be B. If a larger weighted sum indicates a better frame, the preset condition is that the sum of the products of the feature parameters of the at least two features and their corresponding preset weights is greater than or equal to the second preset threshold: when a1·b1 + a2·b2 + … + an·bn ≥ B, it may be determined that the photographing data frame satisfies the preset condition, so that it can be taken as the target photographing data frame.
Specifically, if two or more of the at least two photographing data frames meet the preset condition according to the above method, the photographing data frame with the optimal weighted sum, that is, the best value of a1·b1 + a2·b2 + … + an·bn, can be taken as the target photographing data frame; in other words, the photographing data frame with the best effect is selected as the target.
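The weighted-sum rule above can be illustrated as follows (the weights, feature parameters, and threshold are made-up values, not from the patent):

```python
def weighted_score(features, weights):
    # a1*b1 + a2*b2 + ... + an*bn, as in the preset condition above.
    return sum(a * b for a, b in zip(features, weights))

def pick_target(frames_features, weights, threshold):
    # Keep the frames whose score reaches the second preset threshold B,
    # then return the index of the frame with the highest score.
    scored = [(i, weighted_score(f, weights)) for i, f in enumerate(frames_features)]
    qualified = [(i, s) for i, s in scored if s >= threshold]
    return max(qualified, key=lambda t: t[1])[0] if qualified else None

weights = [0.5, 0.3, 0.2]      # b1..b3, each between 0 and 1
frames = [
    [0.9, 0.2, 0.4],           # frame 0: score 0.59
    [0.6, 0.8, 0.7],           # frame 1: score 0.68 (best)
    [0.2, 0.1, 0.3],           # frame 2: score 0.19 (below threshold)
]
print(pick_target(frames, weights, threshold=0.5))  # → 1
```

Flipping `>=` to `<=` (and `max` to `min`) gives the mirrored "less than or equal to" variant of the preset condition.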
Further, the data stream of the image for photographing may further include at least two pieces of sensor information, where each piece of sensor information corresponds to one photographing data frame, and then step 303 may be: and selecting a target photographing data frame from at least two photographing data frames according to the image characteristics of each photographing data frame and the sensor information corresponding to each photographing data frame.
The sensor information may include, but is not limited to, information obtained by monitoring gravity sensing, a gyroscope, and the like, and the sensor information may be used to indicate information such as shake, speed, and angle when the terminal acquires an image for taking a picture. When the terminal selects the target photographing data frame from the at least two photographing data frames, the terminal can judge each photographing data frame by combining the image characteristics of each photographing data frame and the sensor information corresponding to each photographing data frame.
Specifically, the terminal may first select the photographing data frames meeting the preset condition according to the image-feature judgment described above, then further judge and select among them according to the sensor information, and use the finally selected photographing data frame as the target photographing data frame. The specific judgment according to the sensor information is similar to the judgment against the preset condition on the image features, and is not repeated here. Of course, the terminal may also judge first according to the sensor information of each photographing data frame and then according to the image features, or judge according to the image features and the sensor information together; this is not limited in the embodiment of the present invention.
In addition, if two or more photographing data frames simultaneously satisfy both the image-feature judgment and the sensor-information judgment, the photographing data frame with the best result may be used as the target photographing data frame.
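The two-stage screening described above might be sketched as follows. All data structures, feature names, weights, and thresholds here are illustrative assumptions, not the patent's actual implementation: frames whose weighted feature score meets the preset condition are further filtered by a sensor-based shake judgment, and the best-scoring survivor becomes the target frame.

```python
def weighted_feature_score(features, weights):
    """Sum of products of feature parameters and their preset weights."""
    return sum(features[name] * w for name, w in weights.items())

def select_target_frame(frames, weights, feature_threshold, shake_threshold):
    # Stage 1: keep frames whose weighted feature score meets the preset condition.
    candidates = [f for f in frames
                  if weighted_feature_score(f["features"], weights) >= feature_threshold]
    # Stage 2: keep frames whose sensor information indicates little shake.
    candidates = [f for f in candidates if f["sensor"]["shake"] <= shake_threshold]
    if not candidates:
        return None
    # If several frames satisfy both judgments, take the best-scoring one.
    return max(candidates, key=lambda f: weighted_feature_score(f["features"], weights))

frames = [
    {"id": 0, "features": {"sharpness": 0.9, "contrast": 0.7}, "sensor": {"shake": 0.05}},
    {"id": 1, "features": {"sharpness": 0.6, "contrast": 0.8}, "sensor": {"shake": 0.30}},
    {"id": 2, "features": {"sharpness": 0.8, "contrast": 0.9}, "sensor": {"shake": 0.10}},
]
weights = {"sharpness": 0.6, "contrast": 0.4}
best = select_target_frame(frames, weights, feature_threshold=0.7, shake_threshold=0.2)
print(best["id"])  # → 2
```

As in the text, swapping the order of the two stages (sensor first, features second) or combining both scores into a single judgment would change only the filtering order, not the overall scheme.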
Step 304: and processing the at least two control data frames and the target photographing data frame to generate a display image.
After selecting the target photographing data frame, the terminal may process the at least two control data frames together with the target photographing data frame: it selects, from the at least two control data frames, the control data frame that matches the target photographing data frame according to a timestamp or an identification code, and encodes the target photographing data frame according to the matched control data frame to generate the final display image. The display image may be in any image format, such as JPEG, TIFF, or GIF. The terminal may then present the final display image on a display panel or display.
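The timestamp-based matching in step 304 might look like the following minimal sketch; the frame structures and field names are assumptions for illustration, and real control data frames would carry capture parameters used during encoding.

```python
def match_control_frame(target_frame, control_frames):
    """Return the control data frame whose timestamp is closest to the target's."""
    return min(control_frames,
               key=lambda c: abs(c["timestamp"] - target_frame["timestamp"]))

target = {"timestamp": 1050, "pixels": b"..."}
controls = [
    {"timestamp": 1000, "params": "exp=10ms"},
    {"timestamp": 1048, "params": "exp=12ms"},
    {"timestamp": 1100, "params": "exp=8ms"},
]
matched = match_control_frame(target, controls)
print(matched["params"])  # → exp=12ms
```

Matching by identification code, the alternative the text mentions, would replace the nearest-timestamp search with an exact lookup on the code.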
In practical applications, a terminal executing the above steps 301 to 304 may acquire the data stream of the image for photographing through the camera sensor and buffer it in the data buffer shown in fig. 4. Steps 302 and 303 may be executed by the ISP unit or by a newly added processing unit; fig. 4 illustrates the case in which the newly added processing unit executes steps 302 and 303. After the target photographing data frame is selected, the ISP unit executes step 304 to generate the display image; fig. 4 uses JPEG as an example of the display image format.
In this embodiment of the present invention, after the terminal acquires the data stream of the image for photographing, the terminal may extract the image feature of each of the at least two photographing data frames included in the data stream, select the target photographing data frame from the at least two photographing data frames based on the image features, and then process the at least two control data frames and the target photographing data frame to generate the display image. This ensures the best possible effect of the display image during real-time photographing, remedies a poor display effect, and improves user experience.
The above description mainly introduces the solutions provided in the embodiments of the present invention from the perspective of the terminal. It can be understood that, to implement the foregoing functions, the terminal includes corresponding hardware structures and/or software modules for performing each function. Those skilled in the art will readily appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or as a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as going beyond the scope of the present application.
In the embodiments of the present invention, the terminal may be divided into functional modules according to the foregoing method example; for example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in hardware or as a software functional module. It should be noted that the division of modules in the embodiments of the present invention is schematic and is merely one kind of logical function division; there may be other division manners in actual implementation.
In the case of dividing the functional modules according to the respective functions, fig. 5 shows a possible schematic structural diagram of the terminal involved in the above embodiments. The terminal 400 includes an acquisition unit 401, an extraction unit 402, a selection unit 403, and a processing unit 404. The acquisition unit 401 is configured to execute step 301 in fig. 3; the extraction unit 402 is configured to execute step 302 in fig. 3; the selection unit 403 is configured to execute step 303 in fig. 3; and the processing unit 404 is configured to execute step 304 in fig. 3. For all relevant details of each step in the above method embodiment, refer to the functional description of the corresponding functional module; details are not repeated herein.
In a hardware implementation, the acquisition unit 401 may be a sensor, and the extraction unit 402, the selection unit 403, and the processing unit 404 may be a processor.
Fig. 6 is a schematic diagram of a possible logical structure of the terminal involved in the above embodiments according to an embodiment of the present invention. The terminal 410 includes a memory 411, a processor 412, a sensor 413, a communication interface 414, and a bus 415. The memory 411, the processor 412, the sensor 413, and the communication interface 414 are connected to each other through the bus 415. In this embodiment of the present invention, the processor 412 is configured to control and manage actions of the terminal 410; for example, the processor 412 is configured to perform steps 302 to 304 in fig. 3 and/or other processes of the techniques described herein. The sensor 413 is configured to perform step 301 in fig. 3. The communication interface 414 is configured to support communication of the terminal 410. The memory 411 is configured to store program code and data of the terminal 410.
The processor 412 may be a central processing unit, a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor may also be a combination implementing a computing function, for example, a combination of one or more microprocessors, or a combination of a digital signal processor and a microprocessor. The sensor 413 may include a camera sensor, a gravity sensor, a gyroscope, and the like. The bus 415 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 6, but this does not mean that there is only one bus or one type of bus.
In another embodiment of the present invention, a computer-readable storage medium is also provided, in which computer-executable instructions are stored, and when the computer-executable instructions are executed by at least one processor of a device, the device executes the data processing method shown in fig. 3.
In another embodiment of the present invention, there is also provided a computer program product comprising computer executable instructions stored in a computer readable storage medium; the computer executable instructions may be read by at least one processor of the device from a computer readable storage medium, and execution of the computer executable instructions by the at least one processor causes the device to implement the data processing method shown in fig. 3.
In this embodiment of the present invention, after the data stream of the image for photographing is acquired, the image feature of each of the at least two photographing data frames included in the data stream may be extracted, the target photographing data frame may be selected from the at least two photographing data frames based on the image features, and the at least two control data frames and the target photographing data frame may then be processed to generate the display image. This ensures the quality of the display image during real-time photographing, improves the display effect, and improves user experience.
Finally, it should be noted that: the above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (7)

1. A method of data processing, the method comprising:
acquiring a data stream of an image for photographing, wherein the data stream of the image for photographing comprises at least two photographing data frames and at least two control data frames;
extracting the image characteristics of each photographing data frame in the at least two photographing data frames;
selecting a target photographing data frame from the at least two photographing data frames according to the image characteristics of each photographing data frame;
processing the at least two control data frames and the target photographing data frame to generate a display image;
selecting a target photographing data frame from the at least two photographing data frames according to the image characteristics of each photographing data frame, including:
judging whether the image characteristics of each photographed data frame meet preset conditions or not; if the image feature comprises a feature, the preset condition comprises that a feature parameter of the feature is greater than or equal to a first preset threshold; or, if the image feature includes a feature, the preset condition includes that a feature parameter of the feature is less than or equal to a first preset threshold; or, if the image characteristic includes at least two features, the preset condition includes that the sum of products of feature parameters of the at least two features and corresponding preset weights is greater than or equal to a second preset threshold; or, if the image characteristic includes at least two features, the preset condition includes that the sum of products of feature parameters of the at least two features and corresponding preset weights is less than or equal to a second preset threshold;
and selecting the photographing data frame meeting the preset condition from the at least two photographing data frames as a target photographing data frame.
2. The method of claim 1, wherein the image features comprise at least one of: color features, texture features, shape features, spatial relationship features.
3. The method according to claim 1 or 2, wherein the data stream of the image for photographing further comprises at least two pieces of sensor information, each piece of sensor information corresponds to one photographing data frame, and the selecting a target photographing data frame from the at least two photographing data frames according to the image feature of each photographing data frame comprises:
and selecting a target photographing data frame from the at least two photographing data frames according to the image characteristics of each photographing data frame and the sensor information corresponding to each photographing data frame.
4. A terminal, characterized in that the terminal comprises:
the device comprises an acquisition unit, a display unit and a control unit, wherein the acquisition unit is used for acquiring a data stream of an image for photographing, and the data stream of the image for photographing comprises at least two photographing data frames and at least two control data frames;
the extraction unit is used for extracting the image characteristics of each photographing data frame in the at least two photographing data frames;
the selection unit is used for selecting a target photographing data frame from the at least two photographing data frames according to the image characteristics of each photographing data frame;
the processing unit is used for processing the at least two control data frames and the target photographing data frame to generate a display image;
the selection unit is specifically configured to:
judging whether the image characteristics of each photographed data frame meet preset conditions or not; if the image feature comprises a feature, the preset condition comprises that a feature parameter of the feature is greater than or equal to a first preset threshold; or, if the image feature includes a feature, the preset condition includes that a feature parameter of the feature is less than or equal to a first preset threshold; or, if the image characteristic includes at least two features, the preset condition includes that the sum of products of feature parameters of the at least two features and corresponding preset weights is greater than or equal to a second preset threshold; or, if the image characteristics comprise at least two features, the preset condition comprises that the sum of products of feature parameters of the at least two features and corresponding preset weights is less than or equal to a second preset threshold;
and selecting the photographing data frame meeting the preset condition from the at least two photographing data frames as a target photographing data frame.
5. The terminal of claim 4, wherein the image features comprise at least one of: color features, texture features, shape features, spatial relationship features.
6. The terminal according to claim 4 or 5, wherein the data stream of the image for photographing further comprises at least two pieces of sensor information, each piece of sensor information corresponds to one photographing data frame, and the selection unit is specifically configured to:
and selecting a target photographing data frame from the at least two photographing data frames according to the image characteristics of each photographing data frame and the sensor information corresponding to each photographing data frame.
7. A terminal, characterized in that the terminal comprises a processor, a memory, the memory storing code and data, the processor executing the code in the memory to make the processor execute the data processing method of any of the preceding claims 1-3.
CN201780005199.9A 2016-11-30 2017-03-16 Data processing method and terminal Active CN108431867B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN2016110822529 2016-11-30
CN201611082252 2016-11-30
PCT/CN2017/076944 WO2018098931A1 (en) 2016-11-30 2017-03-16 Method and device for data processing

Publications (2)

Publication Number Publication Date
CN108431867A CN108431867A (en) 2018-08-21
CN108431867B true CN108431867B (en) 2020-12-08

Family

ID=62242709

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780005199.9A Active CN108431867B (en) 2016-11-30 2017-03-16 Data processing method and terminal

Country Status (2)

Country Link
CN (1) CN108431867B (en)
WO (1) WO2018098931A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110807789A (en) * 2019-08-23 2020-02-18 腾讯科技(深圳)有限公司 Image processing method, model, device, electronic equipment and readable storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN103686042A (en) * 2012-09-25 2014-03-26 三星电子株式会社 Method and apparatus for image data processing, and electronic device including the apparatus
CN104869305A (en) * 2014-02-20 2015-08-26 三星电子株式会社 Method for processing image data and apparatus for the same

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
JP2005202469A (en) * 2004-01-13 2005-07-28 Fuji Xerox Co Ltd Image processor, image processing method and program
JP4811433B2 (en) * 2007-09-05 2011-11-09 ソニー株式会社 Image selection apparatus, image selection method, and program


Also Published As

Publication number Publication date
WO2018098931A1 (en) 2018-06-07
CN108431867A (en) 2018-08-21

Similar Documents

Publication Publication Date Title
CN108898567B (en) Image noise reduction method, device and system
CN109068058B (en) Shooting control method and device in super night scene mode and electronic equipment
US20170032219A1 (en) Methods and devices for picture processing
WO2017054442A1 (en) Image information recognition processing method and device, and computer storage medium
CN107704798B (en) Image blurring method and device, computer readable storage medium and computer device
CN110796041B (en) Principal identification method and apparatus, electronic device, and computer-readable storage medium
TW201433162A (en) Electronic device and image selection method thereof
WO2022160857A1 (en) Image processing method and apparatus, and computer-readable storage medium and electronic device
CN105391940B (en) A kind of image recommendation method and device
CN108513069B (en) Image processing method, image processing device, storage medium and electronic equipment
CN111131688B (en) Image processing method and device and mobile terminal
CN110660090A (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
US9445073B2 (en) Image processing methods and systems in accordance with depth information
CN103402058A (en) Shot image processing method and device
CN116582653B (en) Intelligent video monitoring method and system based on multi-camera data fusion
CN114096994A (en) Image alignment method and device, electronic equipment and storage medium
CN109981989B (en) Method and device for rendering image, electronic equipment and computer readable storage medium
CN108259767B (en) Image processing method, image processing device, storage medium and electronic equipment
CN107979729B (en) Method and equipment for displaying preview image
CN108431867B (en) Data processing method and terminal
CN111885371A (en) Image occlusion detection method and device, electronic equipment and computer readable medium
CN108810407B (en) Image processing method, mobile terminal and computer readable storage medium
CN106851099A (en) The method and mobile terminal of a kind of shooting
CN108495038B (en) Image processing method, image processing device, storage medium and electronic equipment
WO2023235043A1 (en) Image frame selection for multi-frame fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant