CN105302872A - Image processing device and method - Google Patents

Image processing device and method

Info

Publication number
CN105302872A
Authority
CN
China
Prior art keywords: image, scene, content information, image processing, scene content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510644164.2A
Other languages
Chinese (zh)
Inventor
戴向东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Nubia Technology Co Ltd
Priority to CN201510644164.2A
Publication of CN105302872A
Priority to PCT/CN2016/099865 (WO2017054676A1)
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50: Information retrieval of still image data
    • G06F 16/58: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583: Retrieval using metadata automatically derived from the content
    • G06F 16/5846: Retrieval using metadata automatically derived from the content, using extracted text

Abstract

The invention discloses an image processing device and method. The device comprises a scene identification module and a writing module. The scene identification module performs scene identification on an image according to preset scene identification parameters and generates scene content information for the image, the scene content information being textual information that describes the scene features of the image. The writing module writes the scene content information into the file attributes of the image. The file attributes of the image therefore contain not only shooting parameter information but also the scene content information of the image; a new attribute is added to the image, making its file attributes richer and more comprehensive. After obtaining an image, a user no longer needs to open it: the specific content of the image can be obtained directly from its file attributes, so the user can obtain rich image information from the file attributes alone, which makes it convenient to browse or filter images quickly.

Description

Image processing apparatus and method
Technical field
The present invention relates to the field of communication technology, and in particular to an image processing apparatus and method.
Background art
In the mobile Internet era, the amount of image data on mobile terminals has grown explosively. When a mobile terminal captures an image, shooting parameter information such as the image height, exposure time, bit depth and shooting location is automatically recorded and written into the file attributes of the image. By checking the file attributes of the image, a user can learn basic information about how the image was taken. The specific content of the image, however, cannot be learned from the file attributes; the image file must be opened and inspected visually. The information contained in the file attributes of a conventional image is therefore neither rich nor comprehensive, and a user cannot use the file attributes to browse or filter images quickly.
Summary of the invention
The primary object of the present invention is to provide an image processing apparatus and method that add a new kind of attribute to an image, so that the information contained in the file attributes of the image is richer and more comprehensive.
To achieve the above object, the present invention provides an image processing apparatus, comprising:
a scene recognition module, configured to perform scene recognition on an image according to preset scene recognition parameters and to generate the scene content information of the image, the scene content information being textual information describing the scene features of the image; and
a writing module, configured to write the scene content information into the file attributes of the image.
Further, the apparatus also comprises an image processing module, configured to process the image according to the scene content information.
Further, the image processing module comprises a classification unit, configured to classify the image according to the scene content information.
Further, the image processing module comprises an annotation unit, configured to generate annotation information from the scene content information when the image is published.
Further, the image processing module comprises an optimization unit, configured to optimize the image according to the scene content information.
Further, the scene recognition module is configured to perform scene recognition on an image immediately after the image is captured or obtained from an external source.
Further, the apparatus also comprises a deep learning module, configured to perform deep learning on big data and train scene recognition parameters capable of distinguishing the scene features of images.
The present invention also provides an image processing method, comprising the steps of:
performing scene recognition on an image according to preset scene recognition parameters and generating the scene content information of the image, the scene content information being textual information describing the scene features of the image; and
writing the scene content information into the file attributes of the image.
Further, after the step of writing the scene content information into the file attributes of the image, the method also comprises:
processing the image according to the scene content information.
Further, processing the image according to the scene content information comprises: classifying the image according to the scene content information.
Further, processing the image according to the scene content information comprises: generating annotation information from the scene content information when the image is published.
Further, processing the image according to the scene content information comprises: optimizing the image according to the scene content information.
Further, the method also comprises: performing scene recognition on an image immediately after the image is captured or obtained from an external source.
Further, before the step of performing scene recognition on an image according to the preset scene recognition parameters, the method also comprises: performing deep learning on big data and training scene recognition parameters capable of distinguishing the scene features of images.
With the image processing apparatus proposed by the invention, an image undergoes scene recognition, scene content information is generated, and that information is written into the file attributes of the image. The file attributes of the image then contain not only shooting parameter information such as the image height, exposure time, bit depth and shooting location, but also the scene content information of the image; a new attribute is added to the image, making its file attributes richer and more comprehensive. After a terminal user or a third-party user obtains the image, the specific content of the image can be read directly from its file attributes without opening the image, so the user can obtain richer image information from the file attributes, which makes it convenient to browse or filter images quickly.
At the same time, the scene content information of an image can be used for further processing of the image. For example: images can be classified automatically by their scene content information, providing a new way of classifying images; annotation information explaining the specific content of an image can be generated automatically when the image is published, sparing the user manual input and providing a new image-sharing experience; and images can be optimized automatically according to their scene content information, making the optimization more targeted and more accurate and enhancing the visual effect of the image.
Brief description of the drawings
Fig. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing embodiments of the present invention;
Fig. 2 is a schematic diagram of a wireless communication system for the mobile terminal shown in Fig. 1;
Fig. 3 is a flowchart of the first embodiment of the image processing method of the present invention;
Fig. 4 is a schematic diagram of scene recognition performed on one image in an embodiment of the present invention;
Fig. 5 is a schematic diagram of scene recognition performed on another image in an embodiment of the present invention;
Fig. 6 is a flowchart of the second embodiment of the image processing method of the present invention;
Fig. 7 is a schematic diagram of the classification and recognition of scene content by a convolutional neural network in an embodiment of the present invention;
Fig. 8 is a flowchart of the third embodiment of the image processing method of the present invention;
Fig. 9 is a flowchart of the fourth embodiment of the image processing method of the present invention;
Fig. 10 is a module diagram of the first embodiment of the image processing apparatus of the present invention;
Fig. 11 is a module diagram of the second embodiment of the image processing apparatus of the present invention;
Fig. 12 is a module diagram of the third embodiment of the image processing apparatus of the present invention;
Fig. 13 is a module diagram of the image processing module in Fig. 12.
The realization of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the embodiments
It should be appreciated that the specific embodiments described herein are intended only to explain the present invention, not to limit it.
A mobile terminal implementing embodiments of the present invention will now be described with reference to the accompanying drawings. In the following description, suffixes such as "module", "part" or "unit" used to denote elements are given merely to aid the description of the present invention and have no specific meaning in themselves; accordingly, "module" and "part" can be used interchangeably.
Mobile terminals may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smartphones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players) and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following, the terminal is assumed to be a mobile terminal. However, those skilled in the art will appreciate that, apart from elements used specifically for mobile purposes, the construction according to the embodiments of the present invention can also be applied to terminals of the fixed type.
Fig. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing embodiments of the present invention.
The mobile terminal 100 may include a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190 and so on. Fig. 1 shows a mobile terminal with various components, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead. The elements of the mobile terminal are described in detail below.
The wireless communication unit 110 typically includes one or more components allowing radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless Internet module 113, a short-range communication module 114 and a location information module 115.
The broadcast receiving module 111 receives broadcast signals and/or broadcast-associated information from an external broadcast management server via a broadcast channel. The broadcast channel may include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and transmits a broadcast signal and/or broadcast-associated information, or a server that receives a previously generated broadcast signal and/or broadcast-associated information and transmits it to a terminal. The broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal and the like, and may further include a broadcast signal combined with a TV or radio broadcast signal. The broadcast-associated information may also be provided via a mobile communication network, in which case it can be received by the mobile communication module 112. The broadcast signal may exist in various forms; for example, it may exist in the form of an electronic program guide (EPG) of digital multimedia broadcasting (DMB), an electronic service guide (ESG) of digital video broadcast-handheld (DVB-H), and so on. The broadcast receiving module 111 can receive signal broadcasts by using various types of broadcast systems. In particular, it can receive digital broadcasts by using digital broadcast systems such as digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video broadcast-handheld (DVB-H), the data broadcasting system of media forward link only (MediaFLO®), and integrated services digital broadcasting-terrestrial (ISDB-T). The broadcast receiving module 111 can be constructed to be suitable for the various broadcast systems providing broadcast signals as well as the above digital broadcast systems. The broadcast signal and/or broadcast-associated information received via the broadcast receiving module 111 may be stored in the memory 160 (or another type of storage medium).
The mobile communication module 112 transmits radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B and the like), an external terminal and a server. Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received according to text and/or multimedia messages.
The wireless Internet module 113 supports wireless Internet access for the mobile terminal. This module may be internally or externally coupled to the terminal. The wireless Internet access technology involved in this module may include WLAN (Wi-Fi), WiBro (Wireless Broadband), WiMAX (Worldwide Interoperability for Microwave Access), HSDPA (High Speed Downlink Packet Access) and so on.
The short-range communication module 114 is a module for supporting short-range communication. Some examples of short-range communication technology include Bluetooth™, radio frequency identification (RFID), infrared data association (IrDA), ultra wideband (UWB), ZigBee™ and so on.
The location information module 115 is a module for checking or acquiring the location information of the mobile terminal. A typical example of the location information module is a GPS (Global Positioning System) module. According to the current technology, the GPS module 115 calculates distance information from three or more satellites together with accurate time information and applies triangulation to the calculated information, thereby accurately calculating three-dimensional current location information according to longitude, latitude and altitude. Currently, the method for calculating location and time information uses three satellites and corrects errors in the calculated location and time information by using a further satellite. In addition, the GPS module 115 can calculate speed information by continuously calculating the current location information in real time.
The A/V input unit 120 is for receiving audio or video signals. The A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capture device in a video capture mode or an image capture mode, and the processed image frames may be displayed on a display module 151. The image frames processed by the camera 121 may be stored in the memory 160 (or another storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided depending on the construction of the mobile terminal. The microphone 122 can receive sound (audio data) in an operation mode such as a phone call mode, a recording mode or a voice recognition mode, and process such sound into audio data. In the phone call mode, the processed audio (voice) data may be converted into a format that can be transmitted to a mobile communication base station via the mobile communication module 112. The microphone 122 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.
The user input unit 130 may generate key input data according to commands input by the user, to control various operations of the mobile terminal. The user input unit 130 allows the user to input various types of information, and may include a keypad, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance and the like caused by being touched), a jog wheel, a jog switch and so on. In particular, when the touch pad is superimposed on the display module 151 as a layer, a touch screen can be formed.
The sensing unit 140 detects the current state of the mobile terminal 100 (e.g., an opened or closed state of the mobile terminal 100), the location of the mobile terminal 100, the presence or absence of user contact with the mobile terminal 100 (i.e., touch input), the orientation of the mobile terminal 100, the acceleration or deceleration movement and direction of the mobile terminal 100 and so on, and generates commands or signals for controlling the operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type mobile phone, the sensing unit 140 can sense whether the slide-type phone is opened or closed. In addition, the sensing unit 140 can detect whether the power supply unit 190 supplies power and whether the interface unit 170 is coupled with an external device. The sensing unit 140 may include a proximity sensor 141, which will be described below in connection with the touch screen.
The interface unit 170 serves as an interface through which at least one external device can be connected with the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port and so on. The identification module may store various information for authenticating a user of the mobile terminal 100, and may include a user identity module (UIM), a subscriber identity module (SIM), a universal subscriber identity module (USIM) and so on. In addition, the device having the identification module (hereinafter referred to as an "identification device") may take the form of a smart card; accordingly, the identification device can be connected with the mobile terminal 100 via a port or other connecting means. The interface unit 170 can be used to receive input (e.g., data, information, power and the like) from an external device and transfer the received input to one or more elements within the mobile terminal 100, or can be used to transfer data between the mobile terminal and an external device.
In addition, when the mobile terminal 100 is connected with an external cradle, the interface unit 170 can serve as a path through which power is supplied from the cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transferred to the mobile terminal. Various command signals or power input from the cradle can serve as signals for recognizing whether the mobile terminal is accurately mounted on the cradle. The output unit 150 is constructed to provide output signals (e.g., audio signals, video signals, alarm signals, vibration signals and the like) in a visual, audible and/or tactile manner. The output unit 150 may include the display module 151, an audio output module 152, an alarm module 153 and so on.
The display module 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in the phone call mode, the display module 151 can display a user interface (UI) or graphical user interface (GUI) related to the call or other communication (such as text messaging, multimedia file downloading and the like). When the mobile terminal 100 is in the video call mode or image capture mode, the display module 151 can display captured and/or received images, or a UI or GUI showing video or images and related functions, and so on.
Meanwhile, when the display module 151 and the touch pad are superimposed on each other as layers to form a touch screen, the display module 151 can serve as both an input device and an output device. The display module 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display and the like. Some of these displays may be constructed to be transparent to allow viewing from the outside; these may be called transparent displays, a typical example being a TOLED (transparent organic light-emitting diode) display. Depending on the particular desired embodiment, the mobile terminal 100 may include two or more display modules (or other display devices); for example, the mobile terminal may include an external display module (not shown) and an internal display module (not shown). The touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
The audio output module 152 can, when the mobile terminal is in a mode such as a call signal reception mode, a call mode, a recording mode, a voice recognition mode or a broadcast reception mode, convert audio data received by the wireless communication unit 110 or stored in the memory 160 into audio signals and output them as sound. Moreover, the audio output module 152 can provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound and so on). The audio output module 152 may include a speaker, a buzzer and so on.
The alarm module 153 can provide output to notify the occurrence of an event of the mobile terminal 100. Typical events may include call reception, message reception, key signal input, touch input and so on. In addition to audio or video output, the alarm module 153 can provide output in different manners to notify the occurrence of an event. For example, the alarm module 153 can provide output in the form of vibration; when a call, a message or some other incoming communication is received, the alarm module 153 can provide tactile output (i.e., vibration) to notify the user. By providing such tactile output, the user can recognize the occurrence of various events even when the user's mobile phone is in the user's pocket. The alarm module 153 can also provide output notifying the occurrence of an event via the display module 151 or the audio output module 152.
The memory 160 can store software programs for the processing and control operations performed by the controller 180 and the like, or temporarily store data that has been output or will be output (e.g., phonebook, messages, still images, video and the like). Moreover, the memory 160 can store data on the vibrations and audio signals of various patterns that are output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium, including flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory and the like), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk and so on. Moreover, the mobile terminal 100 can cooperate over a network connection with a network storage device that performs the storage function of the memory 160.
The controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing related to voice calls, data communication, video calls and the like. In addition, the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data; the multimedia module 181 may be constructed within the controller 180 or may be constructed separately from the controller 180. The controller 180 can perform pattern recognition processing to recognize handwriting input or picture drawing input performed on the touch screen as characters or images.
The power supply unit 190 receives external power or internal power and, under the control of the controller 180, provides the appropriate power required for operating the elements and components.
The various embodiments described herein may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented by using at least one of application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 180. For a software implementation, embodiments such as procedures or functions may be implemented with separate software modules that allow performing at least one function or operation. Software code can be implemented by a software application (or program) written in any suitable programming language, and the software code may be stored in the memory 160 and executed by the controller 180.
So far, the mobile terminal has been described in terms of its functions. In the following, for the sake of brevity, a slide-type mobile terminal, among the various types of mobile terminals such as folder-type, bar-type, swing-type and slide-type mobile terminals, will be described as an example. Accordingly, the present invention can be applied to any type of mobile terminal and is not limited to slide-type mobile terminals.
The mobile terminal 100 shown in Fig. 1 may be constructed to operate with wired and wireless communication systems, as well as satellite-based communication systems, that transmit data via frames or packets.
A communication system in which a mobile terminal according to the present invention can operate will now be described with reference to Fig. 2.
Such communication systems may use different air interfaces and/or physical layers. For example, air interfaces used by communication systems include frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA), universal mobile telecommunications system (UMTS) (in particular, long term evolution (LTE)), global system for mobile communications (GSM) and so on. As a non-limiting example, the following description relates to a CDMA communication system, but such teachings apply equally to other types of systems.
Referring to Fig. 2, the CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, base station controllers (BSC) 275 and a mobile switching center (MSC) 280. The MSC 280 is constructed to interface with a public switched telephone network (PSTN) 290. The MSC 280 is also constructed to interface with the BSCs 275, which can be coupled to the base stations 270 via backhaul lines. The backhaul lines can be constructed according to any of several known interfaces, including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL or xDSL. It will be appreciated that the system shown in Fig. 2 can include a plurality of BSCs 275.
Each BS 270 can serve one or more sectors (or regions), with each sector covered by an omnidirectional antenna or an antenna pointed in a particular direction radially away from the BS 270. Alternatively, each sector may be covered by two or more antennas for diversity reception. Each BS 270 can be constructed to support a plurality of frequency assignments, with each frequency assignment having a particular spectrum (e.g., 1.25 MHz, 5 MHz and so on).
The intersection of a sector and a frequency assignment may be referred to as a CDMA channel. The BS 270 may also be referred to as a base transceiver subsystem (BTS) or by other equivalent terms. In such a case, the term "base station" may be used to broadly denote a single BSC 275 and at least one BS 270. A base station may also be referred to as a "cell site"; alternatively, the individual sectors of a particular BS 270 may be referred to as a plurality of cell sites.
As shown in Fig. 2, a broadcast transmitter (BT) 295 transmits broadcast signals to the mobile terminals 100 operating within the system. The broadcast receiving module 111 shown in Fig. 1 is provided at the mobile terminal 100 to receive the broadcast signals transmitted by the BT 295. Fig. 2 also shows several global positioning system (GPS) satellites 300; the satellites 300 help locate at least one of the plurality of mobile terminals 100.
A plurality of satellites 300 are depicted in Fig. 2, but it will be understood that useful positioning information may be obtained with any number of satellites. The GPS module 115 shown in Fig. 1 is typically constructed to cooperate with the satellites 300 to obtain the desired positioning information. Instead of, or in addition to, GPS tracking techniques, other techniques capable of tracking the location of a mobile terminal may be used. In addition, at least one GPS satellite 300 can selectively or additionally handle satellite DMB transmission.
In a typical operation of the wireless communication system, the BS 270 receives reverse link signals from various mobile terminals 100, which typically engage in calls, messaging and other types of communication. Each reverse link signal received by a particular base station 270 is processed within that BS 270, and the resulting data is forwarded to an associated BSC 275. The BSC provides call resource allocation and mobility management functions, including the coordination of soft handoff procedures between BSs 270. The BSC 275 also routes the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290. Similarly, the PSTN 290 interfaces with the MSC 280, the MSC interfaces with the BSCs 275, and the BSCs 275 in turn control the BSs 270 to transmit forward link signals to the mobile terminals 100.
Based on the above mobile terminal hardware structure and communication system, the various embodiments of the image processing method of the present invention are proposed.
As shown in Fig. 3, the first embodiment of the image processing method of the present invention is proposed; the method comprises the following steps.
S11: performing scene recognition on an image according to preset scene recognition parameters and generating the scene content information of the image.
Specifically, immediately after the terminal captures an image or obtains an image from an external source, deep learning technology is used to perform scene recognition on the image according to the scene recognition parameters, generating the scene content information of the image, namely textual information describing the scene features of the image. Obtaining an image from an external source includes downloading an image from a network or receiving an image transmitted by an external device.
The scene recognition parameters are capable of distinguishing the scene features of an image; they can be obtained directly from an external source and stored locally, or trained by the terminal through deep learning on big data. The way the scene recognition parameters are derived by deep learning training is described in detail in the next embodiment.
The scene content information includes the content labels of the image, the pixel coordinate positions, content association information and so on. In other words, it at least includes the objects in the image, and can also include the image background, the positional layout of each object, and attribute features such as color, kind, shape and associated information.
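For illustration only, the scene content information described above could be held in a small structure like the following sketch (Python; all field names are hypothetical, since the patent prescribes no storage format):

```python
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    label: str                  # content label, e.g. "strawberry"
    bbox: tuple                 # pixel coordinates of the region: (x0, y0, x1, y1)
    attributes: dict = field(default_factory=dict)   # e.g. {"color": "red", "kind": "fruit"}

@dataclass
class SceneContentInfo:
    background: str = ""                              # e.g. "reddish-brown rock dome and blue sky"
    objects: list = field(default_factory=list)       # SceneObject instances detected in the image
    associations: list = field(default_factory=list)  # content association info (related tags)

    def as_text(self) -> str:
        """Flatten to the textual form that is written into the file attributes."""
        tags = [self.background] + [o.label for o in self.objects] + self.associations
        return ", ".join(t for t in tags if t)
```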
As shown in Fig. 4, deep learning technology is used to perform scene recognition on Fig. 4 according to the scene recognition parameters. The object in the image is detected to be a strawberry, along with attribute feature information such as the strawberry's color, the food category it belongs to, and its nutritional and health properties, and the following scene content information is finally generated: strawberry, food, organic plant, fruit, berry, nutrition, health, fresh, red, grass green, close-up, and so on.
As shown in Fig. 5, deep learning technology is used to perform scene recognition on Fig. 5 according to the scene recognition parameters. The upper right of the image is detected to be blue sky, the lower left a reddish-brown rock dome and the middle green trees, with the image as a whole showing an arid landscape, and the following scene content information is generated: the background is a reddish-brown rock dome under a blue sky, and the foreground is an arid scene of small green-leaved grasses, shrubs and light-brown tones.
S12: writing the scene content information into the file attributes of the image.
In this way, through the sequence of performing scene recognition on an image, generating scene content information and writing that information into the file attributes of the image, the file attributes of the image come to contain not only shooting parameter information such as the image height, exposure time, bit depth and shooting location, but also the scene content information of the image, adding a new attribute to the image. After a terminal user or a third-party user obtains the image, the specific content of the image can be read directly from its file attributes without opening the image, so the user can obtain richer image information from the file attributes, which makes it convenient to browse or filter images quickly.
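As a minimal sketch of one way such a write could be done, assuming a JPEG whose EXIF ImageDescription tag (0x010E) stands in for the "file attribute", using the Pillow library (the patent names no concrete file format or library):

```python
from PIL import Image

def write_scene_info(src: str, dst: str, scene_text: str) -> None:
    """Store the generated scene content information in the image's EXIF data."""
    img = Image.open(src)
    exif = img.getexif()
    exif[0x010E] = scene_text          # 0x010E = EXIF ImageDescription tag
    img.save(dst, exif=exif)

def read_scene_info(path: str) -> str:
    """Read the scene content information back without rendering the image."""
    return Image.open(path).getexif().get(0x010E, "")

# Tags from the Fig. 4 strawberry example
write_scene_info("photo.jpg", "photo_tagged.jpg",
                 "strawberry, food, fruit, berry, fresh, red, close-up")
print(read_scene_info("photo_tagged.jpg"))
```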
In addition, a terminal user or third-party user can use the scene content information of the image for further processing; the specific processing is described in detail in the embodiments below.
As shown in Fig. 6, the second embodiment of the image processing method of the present invention is proposed; the method comprises the following steps.
S21: performing deep learning on big data and training scene recognition parameters capable of distinguishing the scene features of images.
Deep learning is one of the most important breakthroughs achieved in the field of artificial intelligence in the past ten years. It has achieved great success in numerous fields such as speech recognition, natural language processing, computer vision, image and video analysis, and multimedia. Deep learning is a method in the field of machine learning for modeling patterns (sound, images and the like); it is also a statistics-based probability model. Once various patterns have been modeled, those patterns can be recognized; for example, when the modeled pattern is sound, the recognition can be understood as speech recognition.
The concept of deep learning stems from research into artificial neural networks: a multi-layer perceptron containing many hidden layers is one kind of deep learning structure. Deep learning combines low-level features to form more abstract high-level representations of attribute categories or features, in order to discover distributed feature representations of the data. To recognize a certain pattern, the usual approach is first to extract the features of that pattern in some way; the extraction may be designed or specified manually, or, given a relatively large amount of data, worked out by the computer itself. Deep learning provides a method for letting the computer learn pattern features automatically, and integrates feature learning into the process of building the model, thereby reducing the incompleteness caused by manually designed features.
Although deep learning can learn pattern features automatically and can achieve good recognition accuracy, the precondition for the algorithm to work is that a "fairly large" amount of data can be provided. In application scenarios where only a limited amount of data is available, the deep learning algorithm cannot make an unbiased estimate of the regularities in the data and may therefore perform worse on recognition than some existing simple algorithms.
With the current rise of big data, the vast amounts of voice and image data from terminal devices, particularly mobile terminals, provide an endless data source for deep learning. Specifically for image scene recognition, deep learning first uses a big data platform to collect feature data for different scenes; these feature data are then fed into a convolutional neural network, which automatically learns the various features of the different scenes and trains the nonlinear feature-combination parameters for classifying these different scenes, i.e., the scene recognition parameters. In subsequent concrete scene recognition, these scene recognition parameters can then be used to recognize different scenes and to distinguish the different backgrounds, the objects and the attribute features of the objects within a scene.
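A minimal sketch of this training step, assuming PyTorch/torchvision as the framework (the patent specifies none), a ResNet-18 as a stand-in CNN, and scene images gathered into folders named by category; the trained weights play the role of the "scene recognition parameters":

```python
import torch
from torch import nn
from torchvision import datasets, models, transforms

# Scene images collected from a big-data platform, arranged as scenes/<label>/*.jpg
data = datasets.ImageFolder("scenes", transform=transforms.Compose([
    transforms.Resize((224, 224)), transforms.ToTensor()]))
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(num_classes=len(data.classes))  # CNN classifier over scene labels
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                  # learn the nonlinear feature-combination parameters
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()

torch.save(model.state_dict(), "scene_recognition_params.pt")  # the "scene recognition parameters"
```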
As shown in Fig. 7, during deep learning the classification and recognition process applied by the convolutional neural network to scene content is: first, input the image; then, extract sub-regions; then, compute convolutional neural network features; finally, classify the regions.
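A compact sketch of that four-stage flow, with a deliberately crude stand-in for region extraction (a real system would use selective search or a learned proposal stage; all helper names here are hypothetical):

```python
import torch
from PIL import Image
from torchvision import transforms

to_tensor = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

def propose_regions(img: Image.Image) -> list:
    """Crude stand-in for region extraction: four overlapping quadrant crops."""
    w, h = img.size
    return [img.crop((x, y, x + w // 2, y + h // 2))
            for x in (0, w // 2) for y in (0, h // 2)]

def recognize_scene(img: Image.Image, model, classes: list) -> set:
    """Fig. 7 flow: input image -> extract regions -> CNN features -> classify regions."""
    model.eval()
    tags = set()
    with torch.no_grad():
        for region in propose_regions(img):            # stage 2: sub-regions
            scores = model(to_tensor(region)[None])    # stages 3 and 4: features + classification
            tags.add(classes[scores.argmax(dim=1).item()])
    return tags                                        # the tags become the scene content information
```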
S22: performing scene recognition on an image according to the preset scene recognition parameters and generating the scene content information of the image.
Specifically, immediately after the terminal captures an image or obtains an image from an external source, deep learning technology is used to perform scene recognition on the image according to the scene recognition parameters, generating the scene content information of the image, namely textual information describing the scene features of the image. Obtaining an image from an external source includes downloading an image from a network or receiving an image transmitted by an external device.
The scene content information includes the content labels of the image, the pixel coordinate positions, content association information and so on. In other words, it at least includes the objects in the image, and can also include the image background, the positional layout of each object, and attribute features such as color, kind, shape and associated information.
As shown in Fig. 4, deep learning technology is used to perform scene recognition on Fig. 4 according to the scene recognition parameters. The object in the image is detected to be a strawberry, along with attribute feature information such as the strawberry's color, the food category it belongs to, and its nutritional and health properties, and the following scene content information is finally generated: strawberry, food, organic plant, fruit, berry, nutrition, health, fresh, red, grass green, close-up, and so on.
As shown in Fig. 5, deep learning technology is used to perform scene recognition on Fig. 5 according to the scene recognition parameters. The upper right of the image is detected to be blue sky, the lower left a reddish-brown rock dome and the middle green trees, with the image as a whole showing an arid landscape, and the following scene content information is generated: the background is a reddish-brown rock dome under a blue sky, and the foreground is an arid scene of small green-leaved grasses, shrubs and light-brown tones.
S23: writing the scene content information into the file attributes of the image.
S24: classifying the image according to the scene content information.
This embodiment uses the scene content information to classify images. At present, images are generally classified according to attributes such as time, shooting location and image size. In this embodiment, after the terminal writes the scene content information of an image into the image's file attributes, it immediately analyzes from the scene content information the scene features contained in the image and classifies the image by visual content according to these features, for example into different categories such as landscape, portrait, animal, food, weather and environment.
For example, according to the scene content information of Fig. 4, the image can be classified as food, fruit, strawberry, close-up and so on; according to the scene content information of Fig. 5, the image can be classified as landscape, arid scene and so on.
In addition, when a third-party user obtains an image containing scene content information, the terminal can automatically obtain the scene features contained in the image by parsing the scene content information, and classify the image by visual content according to these scene features, for example into different categories such as landscape, portrait, animal, food, weather and environment.
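A minimal sketch of such tag-driven classification, assuming the tags have already been read back from the image's file attributes; the category keyword lists are illustrative, not from the patent:

```python
# Illustrative keyword lists per category; a real system would derive these from training
CATEGORIES = {
    "food":      {"food", "fruit", "strawberry", "berry"},
    "landscape": {"landscape", "sky", "rock", "arid scene"},
    "portrait":  {"person", "face"},
    "animal":    {"dog", "cat", "bird"},
}

def classify(scene_tags: set[str]) -> list[str]:
    """Assign an image to every category whose keywords overlap its scene tags."""
    return [name for name, words in CATEGORIES.items() if words & scene_tags] or ["other"]

tags = {"strawberry", "food", "fruit", "fresh", "red", "close-up"}
print(classify(tags))   # -> ['food']
```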
This embodiment thus uses the scene content information of an image to classify the image automatically, providing a new way of classifying images.
As shown in Fig. 8, the third embodiment of the image processing method of the present invention is proposed; the method comprises the following steps.
S31: performing deep learning on big data and training scene recognition parameters capable of distinguishing the scene features of images.
S32: performing scene recognition on an image according to the preset scene recognition parameters and generating the scene content information of the image.
S33: writing the scene content information into the file attributes of the image.
In this embodiment, steps S31 to S33 are respectively identical to steps S21 to S23 of the second embodiment and are not repeated here.
S34: when the image is published, generating annotation information according to the scene content information.
This embodiment uses the scene content information to annotate images. Specifically, when a user publishes an image, the user does not need to type any text to explain its content: the terminal automatically obtains the scene content information of the image and automatically generates annotation information from it to explain the specific content of the image.
For example, in a social application scenario, when a user has taken a photo and uploads it to social software, the user does not have to explain what was shot. After the photo is uploaded, the terminal automatically generates annotation information that explains the content of the image, so that other users viewing the photo can easily learn what its content is, and that content can be associated with other content on the network. For example, if there is a piece of scenery in the image, annotation information such as the scenery's location, name, category and distinctive features can be provided automatically according to the scene content information of the image.
In addition, when a third-party user obtains an image containing scene content information and publishes it, the terminal obtains the scene content information of the image and automatically generates annotation information from it.
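A minimal sketch of annotation generation from parsed scene content information; the field names and phrasing rules are hypothetical, the patent only requiring that the annotation be generated from that information:

```python
def make_annotation(scene_info: dict) -> str:
    """Turn parsed scene content information into a human-readable caption."""
    parts = []
    if scene_info.get("objects"):
        parts.append("Featuring " + ", ".join(scene_info["objects"]))
    if scene_info.get("background"):
        parts.append("against " + scene_info["background"])
    if scene_info.get("attributes"):
        parts.append("(" + ", ".join(scene_info["attributes"]) + ")")
    return " ".join(parts) + "."

# Fig. 5 example: the published photo gets a caption with no manual typing
info = {"objects": ["shrubs", "green grass"],
        "background": "a reddish-brown rock dome and blue sky",
        "attributes": ["arid scene"]}
print(make_annotation(info))
```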
This embodiment thus uses the scene content information of an image to generate, automatically and at publication time, annotation information explaining the specific content of the image, sparing the user manual input and providing a new image-sharing experience.
As shown in Fig. 9, the fourth embodiment of the image processing method of the present invention is proposed; the method comprises the following steps.
S41: performing deep learning on big data and training scene recognition parameters capable of distinguishing the scene features of images.
S42: performing scene recognition on an image according to the preset scene recognition parameters and generating the scene content information of the image.
S43: writing the scene content information into the file attributes of the image.
In this embodiment, steps S41 to S43 are respectively identical to steps S21 to S23 of the second embodiment and are not repeated here.
S44: optimizing the image according to the scene content information.
This embodiment uses the scene content information to optimize images, mainly by adjusting the colors of an image to enhance its visual effect. Specifically, when an image is optimized, the terminal automatically obtains the scene content information of the image, adopts different optimization strategies according to that information, and applies different degrees of optimization to the content of different regions of the image. For example, when the scene content information indicates that the image contains sky and/or grass, the sky is automatically made bluer, showing an azure effect, and the grass is made greener, showing a lush, verdant feel, so that the content in the image is optimized to its best effect; when the scene content information indicates that the sky in the image is gloomy, the dark weather background can be transformed into a bright, sunny one, low-light regions can be enhanced, and so on.
In addition, when a third-party user or developer obtains an image containing scene content information and performs optimization, the terminal automatically obtains the scene content information of the image and performs optimization automatically according to it.
In some embodiments, when an image is optimized, a developer can also check the image attributes, obtain the scene content information of the image, and optimize the image based on that information.
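A minimal sketch of region-wise color optimization with Pillow, under the assumption that the scene content information supplies a labeled bounding box per region; the enhancement factors are illustrative:

```python
from PIL import Image, ImageEnhance

def optimize(path: str, regions: list) -> Image.Image:
    """regions: (x0, y0, x1, y1, kind) boxes taken from the scene content information."""
    img = Image.open(path).convert("RGB")
    for x0, y0, x1, y1, kind in regions:
        box = (x0, y0, x1, y1)
        patch = img.crop(box)
        if kind == "sky":                       # push the sky toward azure
            r, g, b = patch.split()
            patch = Image.merge("RGB", (r, g, b.point(lambda v: min(255, int(v * 1.2)))))
        elif kind == "grass":                   # deepen the greens
            patch = ImageEnhance.Color(patch).enhance(1.3)
        elif kind == "gloomy":                  # brighten low-lit areas
            patch = ImageEnhance.Brightness(patch).enhance(1.4)
        img.paste(patch, box)
    return img

# Boxes as recorded in the scene content information of a hypothetical 800x600 photo
optimize("photo.jpg", [(0, 0, 800, 250, "sky"), (0, 400, 800, 600, "grass")]).save("photo_opt.jpg")
```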
This embodiment thus uses the scene content information of an image to optimize the image automatically, making the optimization more targeted and more accurate and enhancing the visual effect of the image.
It should be understood that, besides the processing using the scene content information of an image enumerated in the previous embodiments, the scene content information can also be used for other kinds of image processing; the present invention places no restriction on this, and such processing likewise falls within the protection scope of the present invention.
The image processing method of the present invention can also be applied to fixed terminal devices such as PCs.
The present invention further provides an image processing apparatus applied to the aforementioned mobile terminal. Based on the above mobile terminal hardware structure and communication system, the various embodiments of the image processing apparatus of the present invention are now proposed.
Referring to Fig. 10, the first embodiment of the image processing apparatus of the present invention is proposed; the apparatus comprises the following modules:
Scene recognition module: for performing scene recognition on an image according to preset scene recognition parameters and generating the scene content information of the image.
Specifically, immediately after the terminal captures an image or obtains an image from an external source, the scene recognition module uses deep learning technology to perform scene recognition on the image according to the scene recognition parameters, generating the scene content information of the image, namely textual information describing the scene features of the image. Obtaining an image from an external source includes downloading an image from a network or receiving an image transmitted by an external device.
The scene recognition parameters are capable of distinguishing the scene features of an image; in this embodiment they are parameters obtained directly from an external source and stored locally.
The scene content information includes the content labels of the image, the pixel coordinate positions, content association information and so on. In other words, it at least includes the objects in the image, and can also include the image background, the positional layout of each object, and attribute features such as color, kind, shape and associated information.
As shown in Fig. 4, deep learning technology is used to perform scene recognition on Fig. 4 according to the scene recognition parameters. The object in the image is detected to be a strawberry, along with attribute feature information such as the strawberry's color, the food category it belongs to, and its nutritional and health properties, and the following scene content information is finally generated: strawberry, food, organic plant, fruit, berry, nutrition, health, fresh, red, grass green, close-up, and so on.
As shown in Fig. 5, deep learning technology is used to perform scene recognition on Fig. 5 according to the scene recognition parameters. The upper right of the image is detected to be blue sky, the lower left a reddish-brown rock dome and the middle green trees, with the image as a whole showing an arid landscape, and the following scene content information is generated: the background is a reddish-brown rock dome under a blue sky, and the foreground is an arid scene of small green-leaved grasses, shrubs and light-brown tones.
Writing module: for writing the scene content information into the file attributes of the image.
In this way, through the sequence of performing scene recognition on an image, generating scene content information and writing that information into the file attributes of the image, the file attributes of the image come to contain not only shooting parameter information such as the image height, exposure time, bit depth and shooting location, but also the scene content information of the image, adding a new attribute to the image. After a terminal user or a third-party user obtains the image, the scene content information of the image can be read directly from its file attributes without opening the image, so the user can obtain richer image information from the file attributes, which makes it convenient to browse or filter images quickly.
In addition, a terminal user or third-party user can use the scene content information of the image for further processing.
See Figure 11, image processing apparatus second embodiment of the present invention is proposed, the difference of the present embodiment and the first embodiment is the increase in a degree of depth study module, described degree of depth study module is used for: utilize large data to carry out degree of depth study, and training can the scene Recognition parameter of scene characteristic of resolution image.
Degree of depth study is one of most important breakthrough of obtaining of artificial intelligence field nearly ten years, and it all achieves immense success at speech recognition, natural language processing, computer vision, image and the numerous areas such as video analysis, multimedia.Degree of depth study is a kind of method of in machine learning field, pattern (sound, image etc.) being carried out to modeling, it is also a kind of probability model of Corpus--based Method, after modeling is carried out to various pattern, just can identify various pattern, such as when the pattern of modeling is sound, this identification just can be understood as speech recognition.
The concept of degree of depth study comes from the research of artificial neural network, and the multilayer perceptron containing many hidden layers is exactly a kind of degree of depth study structure.Degree of depth study forms more abstract high level by combination low-level feature and represents attribute classification or feature, to find that the distributed nature of data represents.In order to carry out the identification of certain pattern, first common way is in some way, extracts the feature in this pattern.The extracting mode of this feature is sometimes engineer or specifies, and is sometimes under given relatively multidata prerequisite, oneself is summed up out by computing machine.Degree of depth study proposes a kind of method allowing computing machine automatic learning exit pattern feature, and has been dissolved into by feature learning in the process of Modling model, thus decreases the incompleteness that artificial design feature causes.
Although degree of depth study module can the feature of mode of learning automatically, and can reach good accuracy of identification, the prerequisite of its work is, the data of magnitude that user can provide " quite large ".That is under the application scenarios that can only provide limited data volume, degree of depth study module just can not carry out agonic have estimated to the rule of data, therefore may be not so good as some existing simple algorithms on recognition effect.
At present along with the rise of large data, the terminal device voice that particularly mobile terminal is a large amount of and view data are that degree of depth study provides Data Source endlessly, specific in image scene identification, first degree of depth study module utilizes large data platform to collect the characteristic of different scene, then these these characteristics are input in convolutional neural networks, carry out the various features of the different scene of automatic learning, train the nonlinear characteristic combination parameter of these different scenes of classification, i.e. scene identification parameter, in concrete scene Recognition, these scene Recognition parameters just can be utilized afterwards to go to identify different scenes, tell the different background in scene, the attributive character of object and object.
As shown in Figure 7, during deep learning, the classification and recognition process that the convolutional neural network applies to scene content is: first, input the image; next, extract subregions; then, compute the convolutional neural network features; finally, classify the regions.
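A hypothetical sketch of those four stages follows, with a plain sliding window standing in for subregion extraction (the patent does not specify a proposal method) and a trained network such as the one above handling feature computation and region classification:

import torch
from PIL import Image
from torchvision import transforms

def sliding_windows(img, size=128, stride=96):
    # Stage 2: extract candidate subregions from the input image.
    w, h = img.size
    for top in range(0, max(h - size, 1), stride):
        for left in range(0, max(w - size, 1), stride):
            yield (left, top), img.crop((left, top, left + size, top + size))

def classify_regions(path, net, class_names):
    to_tensor = transforms.Compose([transforms.Resize((224, 224)),
                                    transforms.ToTensor()])
    img = Image.open(path).convert("RGB")  # Stage 1: input image
    net.eval()
    results = []
    with torch.no_grad():
        for pos, region in sliding_windows(img):
            # Stages 3 and 4: compute CNN features and classify the region.
            logits = net(to_tensor(region).unsqueeze(0))
            results.append((pos, class_names[logits.argmax().item()]))
    return results  # e.g. [((0, 0), "sky"), ((96, 0), "grassland"), ...]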
In this embodiment, the terminal can automatically use big data to perform deep learning and obtain the scene recognition parameters.
Referring to Figure 12, a third embodiment of the image processing apparatus of the present invention is proposed. The difference between this embodiment and the second embodiment is the addition of an image processing module, which is used to: process the image according to the scene content information.
Specifically, as shown in Figure 13, the image processing module comprises a classification unit, an annotation unit, and an optimization unit, wherein:
Classification unit: for classifying the image according to the scene content information.
Specifically, after the writing module writes the scene content information of the image into the file attributes of the image, the classification unit immediately analyzes the scene content information to determine the scene features the image contains, and according to these features classifies the image by visual content, for example into different categories such as landscape, portrait, animal, food, weather, and environment.
For example, according to the scene content information of Figure 4, the image can be classified as food, fruit, strawberry, and so on; according to the scene content information of Figure 5, the image can be classified as landscape, dry scenery, and so on.
In addition, when the terminal obtains an image that already includes scene content information, the classification unit can automatically obtain the scene features the image contains by parsing this scene content information, and according to these scene features classifies the image by visual content, for example into different categories such as landscape, portrait, animal, food, weather, and environment.
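A minimal sketch of such a classification unit is given below; the category names and keyword lists are illustrative assumptions, and the scene content information is assumed to be a comma-separated string as in the examples above:

# Illustrative keyword table; the patent does not prescribe these categories.
CATEGORIES = {
    "landscape": {"mountain", "sea", "sky", "grassland"},
    "food":      {"strawberry", "fruit", "cake"},
    "portrait":  {"person", "face"},
}

def classify(scene_info):
    # Parse the comma-separated scene content information into keywords.
    words = {w.strip().lower() for w in scene_info.split(",")}
    hits = [name for name, keys in CATEGORIES.items() if words & keys]
    return hits or ["other"]

print(classify("sky, grassland, person, sunny"))  # ['landscape', 'portrait']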
Annotation unit: for generating annotation information according to the scene content information when the image is published.
Specifically, when the user publishes an image, there is no need for the user to type text by hand to explain the content of the image: the annotation unit automatically obtains the scene content information of the image and automatically generates annotation information from it, explaining the specific content of the image.
For example, in a social application scenario, when a user takes a photo and uploads it to social software, the user need not explain what was shot; after the photo is uploaded, the annotation unit automatically generates annotation information that explains the content of the image, so that other users viewing the photo can easily see what the photo contains and how that content relates to other content on the network. For instance, if the image contains a piece of scenery, annotation information such as the location, name, category, and distinctive features of this scenery can be provided automatically according to the scene content information of the image.
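The sketch below shows one way the annotation unit might turn a stored comma-separated scene description into a caption at publishing time; the template wording is invented for illustration:

def make_annotation(scene_info):
    # Build a human-readable caption from the scene content information.
    items = [w.strip() for w in scene_info.split(",") if w.strip()]
    if not items:
        return "No scene information available."
    if len(items) == 1:
        return "This photo shows " + items[0] + "."
    return "This photo shows " + ", ".join(items[:-1]) + " and " + items[-1] + "."

print(make_annotation("sky, grassland, person"))
# This photo shows sky, grassland and person.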
Optimization unit: for optimizing the image according to the scene content information, mainly by adjusting the colors of the image to enhance its visual effect.
Specifically, when optimizing an image, the optimization unit automatically obtains the scene content information of the image, adopts different optimization strategies according to this information, and applies different degrees of optimization to the content of different regions in the image. For example, when the scene content information indicates that the image contains sky and/or grassland, the optimization unit automatically makes the sky bluer, giving an azure effect, and makes the grassland greener, giving a lush, verdant look, so that the content of the image is optimized to its best effect; when the scene content information indicates that the image is gloomy, the dusky weather background can be transformed into a bright, sunny one, and darker areas can be brightened.
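By way of illustration, the sketch below applies such keyword-driven adjustments globally with Pillow; a faithful implementation would confine each adjustment to the corresponding image region, as this embodiment describes:

from PIL import Image, ImageEnhance

def optimize(path, scene_info, out_path):
    img = Image.open(path).convert("RGB")
    if "sky" in scene_info or "grassland" in scene_info:
        img = ImageEnhance.Color(img).enhance(1.4)       # richer blues and greens
    if "gloomy" in scene_info or "overcast" in scene_info:
        img = ImageEnhance.Brightness(img).enhance(1.3)  # lift darker areas
    img.save(out_path)

optimize("photo.jpg", "sky, grassland", "photo_optimized.jpg")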
In this embodiment, the scene content information of the image is used to classify images automatically, providing a new way of classifying images; the scene content information is used to automatically generate annotation information that explains the specific content of an image at the moment it is published, sparing the user manual input and providing a new image sharing experience; and the scene content information is used to automatically optimize the image, making the optimization more targeted and more accurate and enhancing the visual effect of the image.
In certain embodiments, the image processing module may also include only one or two of the classification unit, the annotation unit, and the optimization unit.
In certain embodiments, the deep learning module may also be omitted and, as in the first embodiment, the scene recognition parameters are obtained from outside and stored locally.
The image processing apparatus of the present invention can also be applied to fixed terminal devices such as PCs.
It should be noted that, as used herein, the terms "comprises", "comprising", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus comprising a series of elements includes not only those elements but also other elements not expressly listed, or also includes elements inherent to such a process, method, article, or apparatus. Without further limitation, an element qualified by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus comprising that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product. This computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and do not thereby limit the patent scope of the present invention; every equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims (14)

1. An image processing apparatus, characterized by comprising:
a scene recognition module, configured to perform scene recognition on an image according to preset scene recognition parameters and generate scene content information of the image, the scene content information being text information describing the scene features of the image;
a writing module, configured to write the scene content information into the file attributes of the image.
2. The image processing apparatus according to claim 1, characterized by further comprising an image processing module, wherein the image processing module is configured to: process the image according to the scene content information.
3. The image processing apparatus according to claim 2, characterized in that the image processing module comprises a classification unit, wherein the classification unit is configured to: classify the image according to the scene content information.
4. The image processing apparatus according to claim 2, characterized in that the image processing module comprises an annotation unit, wherein the annotation unit is configured to: generate annotation information according to the scene content information when the image is published.
5. The image processing apparatus according to claim 2, characterized in that the image processing module comprises an optimization unit, wherein the optimization unit is configured to: optimize the image according to the scene content information.
6. The image processing apparatus according to claim 1, characterized in that the scene recognition module is configured to: perform scene recognition on the image immediately after the image is shot or obtained from outside.
7. The image processing apparatus according to any one of claims 1-6, characterized in that the image processing apparatus further comprises a deep learning module, wherein the deep learning module is configured to: perform deep learning on big data and train scene recognition parameters capable of distinguishing the scene features of images.
8. An image processing method, characterized by comprising the steps of:
performing scene recognition on an image according to preset scene recognition parameters and generating scene content information of the image, the scene content information being text information describing the scene features of the image;
writing the scene content information into the file attributes of the image.
9. The image processing method according to claim 8, characterized by further comprising, after the step of writing the scene content information into the file attributes of the image:
processing the image according to the scene content information.
10. The image processing method according to claim 9, characterized in that the processing of the image according to the scene content information comprises: classifying the image according to the scene content information.
11. The image processing method according to claim 9, characterized in that the processing of the image according to the scene content information comprises: generating annotation information according to the scene content information when the image is published.
12. The image processing method according to claim 9, characterized in that the processing of the image according to the scene content information comprises: optimizing the image according to the scene content information.
13. The image processing method according to claim 8, characterized in that the method further comprises: performing scene recognition on the image immediately after the image is shot or obtained from outside.
14. The image processing method according to any one of claims 8-13, characterized in that, before the step of performing scene recognition on the image according to the preset scene recognition parameters, the method further comprises: performing deep learning on big data and training scene recognition parameters capable of distinguishing the scene features of images.
CN201510644164.2A 2015-09-30 2015-09-30 Image processing device and method Pending CN105302872A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201510644164.2A CN105302872A (en) 2015-09-30 2015-09-30 Image processing device and method
PCT/CN2016/099865 WO2017054676A1 (en) 2015-09-30 2016-09-23 Image processing device, terminal, and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510644164.2A CN105302872A (en) 2015-09-30 2015-09-30 Image processing device and method

Publications (1)

Publication Number Publication Date
CN105302872A (en) 2016-02-03

Family

ID=55200142

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510644164.2A Pending CN105302872A (en) 2015-09-30 2015-09-30 Image processing device and method

Country Status (2)

Country Link
CN (1) CN105302872A (en)
WO (1) WO2017054676A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109255826B (en) * 2018-10-11 2023-11-21 平安科技(深圳)有限公司 Chinese training image generation method, device, computer equipment and storage medium
CN111027622B (en) * 2019-12-09 2023-12-08 Oppo广东移动通信有限公司 Picture label generation method, device, computer equipment and storage medium
CN114677691B (en) * 2022-04-06 2023-10-03 北京百度网讯科技有限公司 Text recognition method, device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050105776A1 (en) * 2003-11-13 2005-05-19 Eastman Kodak Company Method for semantic scene classification using camera metadata and content-based cues
CN102422286A (en) * 2009-03-11 2012-04-18 香港浸会大学 Automatic and semi-automatic image classification, annotation and tagging through the use of image acquisition parameters and metadata

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104572905B (en) * 2014-12-26 2018-09-04 小米科技有限责任公司 Print reference creation method, photo searching method and device
CN105302872A (en) * 2015-09-30 2016-02-03 努比亚技术有限公司 Image processing device and method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050105776A1 (en) * 2003-11-13 2005-05-19 Eastman Kodak Company Method for semantic scene classification using camera metadata and content-based cues
CN102422286A (en) * 2009-03-11 2012-04-18 香港浸会大学 Automatic and semi-automatic image classification, annotation and tagging through the use of image acquisition parameters and metadata

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
NIU Jie et al.: "An Indoor Scene Recognition Method Fusing Global and Salient Region Features", Robot *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017054676A1 (en) * 2015-09-30 2017-04-06 努比亚技术有限公司 Image processing device, terminal, and method
CN106156310A (en) * 2016-06-30 2016-11-23 努比亚技术有限公司 A kind of picture processing apparatus and method
CN106991427A (en) * 2017-02-10 2017-07-28 海尔优家智能科技(北京)有限公司 The recognition methods of fruits and vegetables freshness and device
WO2019034070A1 (en) * 2017-08-18 2019-02-21 广州极飞科技有限公司 Method and apparatus for monitoring plant health state
US11301986B2 (en) 2017-08-18 2022-04-12 Guangzhou Xaircraft Technology Co., Ltd Method and apparatus for monitoring plant health state
CN109406412A (en) * 2017-08-18 2019-03-01 广州极飞科技有限公司 A kind of plant health method for monitoring state and device
CN107808125A (en) * 2017-09-30 2018-03-16 珠海格力电器股份有限公司 image sharing method and device
US11430209B2 (en) 2017-10-13 2022-08-30 Huawei Technologies Co., Ltd. Image signal processing method, apparatus, and device
WO2019072057A1 (en) * 2017-10-13 2019-04-18 华为技术有限公司 Image signal processing method, apparatus and device
WO2019109801A1 (en) * 2017-12-06 2019-06-13 Oppo广东移动通信有限公司 Method and device for adjusting photographing parameter, storage medium, and mobile terminal
CN108462876A (en) * 2018-01-19 2018-08-28 福州瑞芯微电子股份有限公司 A kind of video decoding optimization adjusting apparatus and method
CN108462876B (en) * 2018-01-19 2021-01-26 瑞芯微电子股份有限公司 Video decoding optimization adjustment device and method
CN111566639A (en) * 2018-02-09 2020-08-21 华为技术有限公司 Image classification method and device
CN108629767A (en) * 2018-04-28 2018-10-09 Oppo广东移动通信有限公司 A kind of method, device and mobile terminal of scene detection
CN108683826A (en) * 2018-05-15 2018-10-19 腾讯科技(深圳)有限公司 Video data handling procedure, device, computer equipment and storage medium
CN110619251A (en) * 2018-06-19 2019-12-27 Oppo广东移动通信有限公司 Image processing method and device, storage medium and electronic equipment
CN110619251B (en) * 2018-06-19 2022-06-10 Oppo广东移动通信有限公司 Image processing method and device, storage medium and electronic equipment
CN109815462A (en) * 2018-12-10 2019-05-28 维沃移动通信有限公司 A kind of document creation method and terminal device
CN109815462B (en) * 2018-12-10 2023-12-01 维沃移动通信有限公司 Text generation method and terminal equipment
CN110717475A (en) * 2019-10-18 2020-01-21 北京汽车集团有限公司 Automatic driving scene classification method and system
CN112287790A (en) * 2020-10-20 2021-01-29 北京字跳网络技术有限公司 Image processing method, image processing device, storage medium and electronic equipment

Also Published As

Publication number Publication date
WO2017054676A1 (en) 2017-04-06

Similar Documents

Publication Publication Date Title
CN105302872A (en) Image processing device and method
CN106156310A (en) A kind of picture processing apparatus and method
CN106603823A (en) Content sharing method and device and terminal
CN104902212A (en) Video communication method and apparatus
CN104917881A (en) Multi-mode mobile terminal and implementation method thereof
CN105224925A (en) Video process apparatus, method and mobile terminal
CN106686301A (en) Picture shooting method and device
CN106973408A (en) Antenna allocation method and device
CN105100491A (en) Device and method for processing photo
CN104679890B (en) Picture method for pushing and device
CN104967802A (en) Mobile terminal, recording method of screen multiple areas and recording device of screen multiple areas
CN105303398A (en) Information display method and system
CN106356065A (en) Mobile terminal and voice conversion method
CN105049637A (en) Device and method for controlling instant communication
CN106682964A (en) Method and apparatus for determining application label
CN105933529A (en) Shooting picture display method and device
CN106372607A (en) Method for reading pictures from videos and mobile terminal
CN106909681A (en) A kind of information processing method and its device
CN106851113A (en) A kind of photographic method and mobile terminal based on dual camera
CN105227829A (en) Preview picture device and its implementation
CN105278860A (en) Mobile terminal image uploading device and method
CN105263195A (en) Data transmission device and method
CN105099701A (en) Terminal and terminal authentication method
CN106708804A (en) Method and device for generating word vectors
CN104866095A (en) Mobile terminal, and method and apparatus for managing desktop thereof

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160203