CN107657638A - Image processing method, device and computer-readable storage medium - Google Patents

Image processing method, device and computer-readable storage medium

Info

Publication number
CN107657638A
Authority
CN
China
Prior art keywords
image
subject image
status information
image element
carried out
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711066232.7A
Other languages
Chinese (zh)
Inventor
兰向宇 (Lan Xiangyu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd filed Critical Nubia Technology Co Ltd
Priority to CN201711066232.7A priority Critical patent/CN107657638A/en
Publication of CN107657638A publication Critical patent/CN107657638A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/24 Aligning, centring, orientation detection or correction of the image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image processing method, including: performing image recognition on an image to be processed to identify a target image element of the image to be processed; performing state analysis on the target image element to determine state information of the target image element, and determining an action model corresponding to the state information; and performing image processing on the target image element according to the action model to generate a dynamic image corresponding to the image to be processed. The embodiments of the present invention also provide a device and a computer-readable storage medium for implementing the above method. The present invention can bring a vivid viewing experience to the user while saving storage space.

Description

Image processing method, device and computer-readable storage medium
Technical field
The present invention relates to image processing technologies, and in particular to an image processing method, a device, and a computer-readable storage medium.
Background
At present, when taking pictures, different kinds of shooting are performed according to the shooting instructions received in different modes. Specifically, when a shooting instruction is received in video mode, the moment the start-shooting instruction is received is taken as the start time, the moment the end-shooting instruction is received is taken as the end time, and the video within the period between the start time and the end time is obtained. In picture mode, when a shooting instruction is received, an image is captured at the moment the instruction is received. Here, a video composed of many frames can be displayed dynamically and bring the user a better viewing experience, but storing a video occupies a large amount of storage resources; storing an image saves storage space, but a statically displayed image gives the user a poor viewing experience.
Therefore, it is desirable to provide an image processing solution that can bring the user a vivid viewing experience while saving storage space.
Summary of the invention
In view of this, the embodiments of the present invention provide an image processing method, a device, and a computer-readable storage medium, which can bring a vivid viewing experience to the user while saving storage space.
The technical solutions of the embodiments of the present invention are implemented as follows:
In one aspect, an embodiment of the present invention provides an image processing method, including: performing image recognition on an image to be processed to identify a target image element of the image to be processed; performing state analysis on the target image element to determine state information of the target image element, and determining an action model corresponding to the state information; and performing image processing on the target image element according to the action model to generate a dynamic image corresponding to the image to be processed.
In another aspect, an embodiment of the present invention provides an image processing device for implementing the above image processing method, including: a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement: performing image recognition on an image to be processed to identify a target image element of the image to be processed; performing state analysis on the target image element to determine state information of the target image element, and determining an action model corresponding to the state information; and performing image processing on the target image element according to the action model to generate a dynamic image corresponding to the image to be processed.
In another aspect, a computer-readable storage medium for implementing the above image processing method is provided.
In the image processing method, device, and computer-readable storage medium provided by the embodiments of the present invention, state analysis is performed on a target image element of a statically displayed image, the action model corresponding to the target element's state is determined, and the target image element is processed according to the determined action model so that it becomes dynamic. Here, image processing is applied only to the target image element of the statically displayed image, so that the region of the target image element becomes a dynamically displayed region while the image elements outside the target image element remain statically displayed. Thus dynamic display is achieved without increasing storage occupation, and the viewing experience of the user is improved.
Brief description of the drawings
Fig. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing various embodiments of the present invention;
Fig. 2 is a schematic diagram of a wireless communication system of the mobile terminal shown in Fig. 1;
Fig. 3 is a schematic flowchart of the image processing method in Embodiment 1 of the present invention;
Fig. 4 is a schematic diagram of image elements in Embodiment 1 of the present invention;
Fig. 5 is a schematic diagram of three-dimensional analysis in Embodiment 1 of the present invention;
Fig. 6 is a schematic diagram of a target image element in Embodiment 1 of the present invention;
Fig. 7 is a schematic flowchart of the image processing method in Embodiment 2 of the present invention;
Fig. 8 is a schematic diagram of target image elements in different associated images in Embodiment 2 of the present invention;
Fig. 9 is a schematic flowchart of the image processing method in Embodiment 3 of the present invention;
Fig. 10 is a schematic structural diagram of the image processing device in Embodiment 4 of the present invention.
Detailed description of the embodiments
It should be understood that the specific embodiments described herein are merely intended to illustrate the present invention and are not intended to limit the present invention.
In the following description, suffixes such as "module", "component" or "unit" used to denote elements are only used to facilitate the description of the present invention and have no specific meaning in themselves; therefore, "module", "component" and "unit" may be used interchangeably.
A terminal may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, tablet computers, notebook computers, palmtop computers, personal digital assistants (PDA), portable media players (PMP), navigation devices, wearable devices, smart bracelets and pedometers, as well as fixed terminals such as digital TVs and desktop computers.
The following description takes a mobile terminal as an example. Those skilled in the art will understand that, except for elements specifically intended for mobile purposes, the constructions according to the embodiments of the present invention can also be applied to fixed-type terminals.
Referring to Fig. 1, which is a schematic diagram of the hardware structure of a mobile terminal implementing various embodiments of the present invention, the mobile terminal 100 may include: an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, a power supply 111, and other components. Those skilled in the art will understand that the mobile terminal structure shown in Fig. 1 does not constitute a limitation on the mobile terminal; the mobile terminal may include more or fewer components than illustrated, combine some components, or have a different arrangement of components.
The components of the mobile terminal are described below with reference to Fig. 1:
The radio frequency unit 101 may be used to receive and transmit signals during message transmission and reception or during a call. Specifically, downlink information from a base station is received and passed to the processor 110 for processing; in addition, uplink data is sent to the base station. Generally, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 may also communicate with a network and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution), TDD-LTE (Time Division Duplexing-Long Term Evolution), and the like.
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help the user send and receive e-mail, browse web pages, access streaming media, and so on; it provides the user with wireless broadband Internet access. Although Fig. 1 shows the WiFi module 102, it can be understood that it is not an essential component of the mobile terminal and can be omitted as needed without changing the essence of the invention.
The audio output unit 103 may, when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a speech recognition mode, a broadcast reception mode or the like, convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output it as sound. Moreover, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (for example, a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a loudspeaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processing unit 1041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sound (audio data) in operation modes such as a phone call mode, a recording mode and a speech recognition mode, and can process such sound into audio data. In the phone call mode, the processed audio (voice) data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101 and output. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and sending audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, a color temperature sensor and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As a kind of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally three axes) and the magnitude and direction of gravity when stationary; it can be used for applications that recognize the posture of the mobile phone (such as landscape/portrait switching, related games, magnetometer pose calibration) and vibration-recognition-related functions (such as pedometer and tapping). The color temperature sensor is used to detect the color temperature of the ambient light. The mobile phone may also be configured with other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer and an infrared sensor, which will not be repeated here.
The display unit 106 is used to display information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, collects the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 1071 with a finger, a stylus or any other suitable object or accessory) and drives the corresponding connecting device according to a preset program. The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position and the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types such as resistive, capacitive, infrared and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 may also include other input devices 1072. Specifically, the other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse and a joystick, which are not specifically limited here.
Further, the touch panel 1071 may cover the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, the operation is transmitted to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in Fig. 1 the touch panel 1071 and the display panel 1061 are implemented as two separate components to realize the input and output functions of the mobile terminal, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to realize the input and output functions of the mobile terminal, which is not specifically limited here.
The interface unit 108 serves as an interface through which at least one external device can be connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (for example, data information, power, etc.) from an external device and transfer the received input to one or more elements within the mobile terminal 100, or may be used to transfer data between the mobile terminal 100 and an external device.
The memory 109 may be used to store software programs and various data. The memory 109 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, application programs required by at least one function (such as a sound playback function and an image playback function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data and a phone book), and the like. In addition, the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, for example at least one magnetic disk storage device, a flash memory device, or other solid-state storage devices.
The processor 110 is the control center of the mobile terminal. It connects the various parts of the entire mobile terminal using various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby monitoring the mobile terminal as a whole. The processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, application programs and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 110.
The mobile terminal 100 may also include a power supply 111 (such as a battery) that supplies power to each component. Preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, so that functions such as charging, discharging and power consumption management are implemented through the power management system.
Although not shown in Fig. 1, the mobile terminal 100 may also include a Bluetooth module and the like, which will not be repeated here.
To facilitate the understanding of the embodiments of the present invention, the communication network system on which the mobile terminal of the present invention is based is described below.
Referring to Fig. 2, Fig. 2 is an architecture diagram of a communication network system provided by an embodiment of the present invention. The communication network system is an LTE system of the universal mobile communication technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203 and an operator IP service 204, which are communicatively connected in sequence.
Specifically, the UE 201 may be the terminal 100 described above, which will not be repeated here.
The E-UTRAN 202 includes an eNodeB 2021 and other eNodeBs 2022, etc. The eNodeB 2021 may be connected with the other eNodeBs 2022 through a backhaul (for example, an X2 interface), the eNodeB 2021 is connected to the EPC 203, and the eNodeB 2021 can provide the UE 201 with access to the EPC 203.
The EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, and the like. The MME 2031 is a control node that handles signaling between the UE 201 and the EPC 203 and provides bearer and connection management. The HSS 2032 provides registers to manage functions such as the home location register (not shown) and stores user-specific information about service features, data rates, and the like. All user data may be transmitted through the SGW 2034; the PGW 2035 may provide IP address allocation and other functions for the UE 201; and the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, which selects and provides available policy and charging control decisions for the policy and charging enforcement function unit (not shown).
The IP service 204 may include the Internet, an intranet, an IMS (IP Multimedia Subsystem) or other IP services.
Although the above description takes the LTE system as an example, those skilled in the art should understand that the present invention is not only applicable to the LTE system, but is also applicable to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA and future new network systems, which is not limited here.
Based on the above mobile terminal hardware structure and communication network system, the embodiments of the method of the present invention are proposed.
Embodiment 1
Based on the foregoing embodiments, an embodiment of the present invention provides an image processing method. The method is applied to a terminal, and the functions realized by the method can be realized by a processor in the terminal calling program code; of course, the program code can be stored in a computer storage medium. It can be seen that the terminal includes at least a processor and a storage medium.
Fig. 3 is a schematic flowchart of the image processing method in Embodiment 1 of the present invention. As shown in Fig. 3, the method includes:
S301: performing image recognition on an image to be processed to identify a target image element of the image to be processed;
When the user captures an image with a camera, or receives an image sent by the network side through a communication means such as a chat application, the image is taken as the image to be processed, and when the image is displayed it is displayed after image processing has been performed on it. Here, the specific source of the image is not limited.
A display control switch including display options may be provided in the display interface of the terminal. The display options of the display control switch include static display and dynamic display: when the display control switch is on, dynamic display is selected; when the display control switch is off, static display is selected. Here, the display control switch is used to control static display and dynamic display, and the specific control manner is not limited. When static display is selected through the display control switch, the captured or received image is not processed; when dynamic display is selected through the display control switch, image processing is performed on the captured or received image so that it is displayed as a dynamic image.
Performing image recognition on the image to be processed to identify the target image element of the image to be processed includes: performing edge processing on the image to be processed to obtain the element contours corresponding to the image elements in the image to be processed; and determining the target image element among the image elements using the element contours. Specifically, when the image to be processed is recognized, edge-finding processing is performed on it, and in the resulting image the element contours of the image elements are shown as lines. The target image element can then be determined according to parameters of each element contour such as its completeness, size and position. For example, the image element corresponding to a complete element contour may be determined as the target image element, or the image element whose element contour is complete and located at the center of the image may be determined as the target image element. As shown in Fig. 4, (a) is the image to be processed and (b) is the image after edge-finding processing. In (b), the element contours are sketched out by lines, each element contour corresponding to one image element: the person of image element 1, the sector region of image element 2, the text of image element 3 and the leaf of image element 4. When the target image element is determined by the completeness of the element contour, the target image elements are image element 1 and image element 3; when the target image element is determined by the position of the element contour, the target image element is image element 1.
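The patent does not fix a particular edge detector or selection rule, so the following is only a minimal sketch of the idea using OpenCV (4.x return signature assumed): edge-finding, contour extraction, and target selection with "completeness" approximated by contour area and "centrality" by the distance of the contour centroid from the image center. The function name and thresholds are illustrative assumptions.

```python
import cv2
import numpy as np

def find_target_elements(image_bgr, max_center_dist_ratio=0.25):
    """Sketch of S301: edge-finding processing, element contours, target selection."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)                        # edge-finding processing
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # element contours
    h, w = gray.shape
    image_center = np.array([w / 2.0, h / 2.0])
    targets = []
    for c in contours:
        if cv2.contourArea(c) < 0.01 * w * h:   # discard tiny / incomplete outlines
            continue
        m = cv2.moments(c)
        if m["m00"] == 0:
            continue
        centroid = np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])
        # prefer contours whose centroid lies near the image center
        if np.linalg.norm(centroid - image_center) <= max_center_dist_ratio * max(w, h):
            targets.append(c)
    return targets
```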
It should be noted that one or more target image elements may be determined. When multiple target image elements are included, the multiple target image elements may be independent of each other or associated with each other. When the element contours of multiple target image elements are associated, the corresponding target image elements are associated with each other, and target image elements that are associated with each other are referred to as associated target image elements.
When the target image element is determined, the image elements of the image to be processed may also be determined through image parameters such as the color, light and depth of field of the image to be processed, and the target image element may then be selected from the image elements. The selection condition for selecting the target image element from the image elements can be set according to actual requirements.
Here, the image to be processed may be a single image or multiple continuous images; multiple continuous images to be processed are referred to as associated images. When the image to be processed includes multiple associated images, the target image element of each associated image can be determined separately.
S302: performing state analysis on the target image element to determine state information of the target image element, and determining an action model corresponding to the state information;
After the target image element is determined, state analysis is performed on it to determine its state information. Here, the state information characterizes the state of the target image element. For example, when the target image element is a person, the state information may include information such as the person's expression and action.
Here, performing state analysis on the target image element to determine its state information includes: performing three-dimensional analysis on the target image element to obtain its state information, for example action information such as walking, running, sitting, raising a hand, jumping, waving or saluting.
When the image to be processed includes at least two associated images, performing state analysis on the target image element to determine its state information includes: performing state analysis on the target image element in each associated image respectively to obtain the sub-state information of the target image element in each associated image; and determining the state information of the target image element according to each piece of sub-state information.
The three-dimensional analysis here is as shown in Fig. 5. If the target image element is a person, the action of the person can be analyzed through three-dimensional analysis. During three-dimensional analysis, the element contour of the target image element can be analyzed; specifically, a center point is determined from the element contour, connecting lines are drawn from the center point to the protruding parts of the element contour, and the length ratios of the connecting lines and the angles between them are determined, the angles here being three-dimensional angles. The three-dimensional analysis can be realized by three-dimensional modeling, and the specific modeling method can be configured according to the actual requirements of the user; the embodiment of the present invention only requires that the state of the target image element be analyzed through three-dimensional modeling.
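Since the patent leaves the concrete modeling method open, the following is a minimal sketch of the contour-based features it mentions (centroid, connecting lines to protruding points, their length ratios and angles), working only in the 2D contour plane and approximating the protruding parts by convex-hull vertices; both simplifications are assumptions, and the three-dimensional angles would require the stereo model the text refers to.

```python
import cv2
import numpy as np

def contour_pose_features(contour):
    """Centroid, connecting lines to protruding points, length ratios and angles."""
    m = cv2.moments(contour)                              # assumes a non-degenerate contour
    center = np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])
    hull = cv2.convexHull(contour).reshape(-1, 2)         # protruding parts (assumption)
    vectors = hull - center                               # connecting lines from the center
    lengths = np.linalg.norm(vectors, axis=1)
    ratios = lengths / lengths.max()                      # length ratios of the connecting lines
    angles = np.degrees(np.arctan2(vectors[:, 1], vectors[:, 0]))  # angles of the lines
    return {"center": center, "length_ratios": ratios, "angles": angles}
```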
After the state information of the target image element is determined, it is matched against stored action models, and the action model corresponding to the state information is determined.
An action model may be a model characterizing a dynamic action, including time information, the action information at each moment, and so on. When matching the state information to an action model, the action model corresponding to the state information can be looked up directly from a correspondence between state information and action models, or the state information can be matched against the action labels of the action models to find an action model whose action label matches the state information.
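A minimal sketch of the two matching strategies named above (direct correspondence and matching against action labels); the registry contents, label strings and placeholder model data are illustrative assumptions rather than part of the disclosure.

```python
# Hypothetical action-model registry keyed by action label.
ACTION_MODELS = {
    "running": {"label": "running", "frames": None},   # placeholder frame data
    "waving":  {"label": "waving",  "frames": None},
    "wave":    {"label": "wave",    "frames": None},   # e.g. a sea-wave model
}

def match_action_model(state_info: str):
    # Strategy 1: direct correspondence between state information and an action model.
    if state_info in ACTION_MODELS:
        return ACTION_MODELS[state_info]
    # Strategy 2: match the state information against the action labels.
    for model in ACTION_MODELS.values():
        if model["label"] in state_info or state_info in model["label"]:
            return model
    return None
```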
In practical applications, an action model characterizes a specific action and includes the three-dimensional structural model established for a thing and the individual actions it exhibits. Viewed statically, an action model is a set of multiple single actions; viewed dynamically, it is the dynamic action composed of those single actions.
The state information is matched against the action labels of the action models. For example, when the state information is "running", an action model whose action label is "running" is looked up; the model found is then a dynamic running model.
Here, the state information may also be information such as the color and size of the target image element; in that case the state information of the target image element can be obtained without three-dimensional analysis. For example, as shown in Fig. 6, when the target image element is the sea, the state information is the color and size of the sea; three-dimensional analysis shows that the sea has waves, and the action model determined from the state information is a wave model. If the action models do not include a blue wave model of the same size as the target image element, the wave model can be modified according to the color and size of the target image element to obtain a blue wave model of the same size as the target image element.
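A minimal sketch of adapting a generic action model to the color and size of the target element, as in the wave example; representing the model as a list of single-channel mask frames is an assumption made only for illustration.

```python
import cv2
import numpy as np

def adapt_wave_model(model_frames, target_size, target_bgr_color):
    """Resize each model frame and tint it with the target element's color."""
    adapted = []
    for frame in model_frames:                   # frames assumed to be 2D uint8 masks
        resized = cv2.resize(frame, target_size)
        colored = np.zeros((*resized.shape, 3), dtype=np.uint8)
        for c in range(3):                       # apply the target color where the mask is set
            colored[:, :, c] = (resized / 255.0 * target_bgr_color[c]).astype(np.uint8)
        adapted.append(colored)
    return adapted
```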
When the target image element includes at least two associated target image elements, performing state analysis on the target image element to determine its state information, and determining the action model corresponding to the state information, includes: performing state analysis on each associated target image element, determining the associated state information of each associated target image element, and determining the action model corresponding to each piece of associated state information according to each piece of associated state information and the relationship between the pieces of associated state information.
In the embodiment of the present invention, when the state information of the target image element is determined, image recognition and three-dimensional analysis can be combined to determine the state information of the target image element, and the action model corresponding to the target image element is determined according to the state information.
In practical applications, the action model can be obtained locally from the terminal or obtained from the network. When an existing action model is modified according to the state information to obtain a new action model, the corrected state information can be added on the basis of the original action model to obtain the modified action model.
S303: performing image processing on the target image element according to the action model to generate a dynamic image corresponding to the image to be processed.
After the action model is determined, the target image element is processed according to the determined action model. Here, the state information of the target image element and the action model corresponding to the state information can be combined to produce the dynamic image after image processing.
Taking the example shown in Fig. 6, when the target image element is the blue sea, the corresponding action model is a wave model. Blue waves can be obtained from the wave model combined with the color and size of the target image element, and in the generated dynamic image blue waves are displayed in the corresponding sea region.
After image processing is performed on the image to be processed, the region of the target image element of the image to be processed is a dynamic image, while the part outside the target image element remains the original still image, thereby realizing partial dynamic processing.
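A minimal sketch of this partial dynamic compositing: the region inside the target element's contour is replaced frame by frame with the adapted action-model frames while the rest of the image stays static. The frame format and the idea of encoding the output frames as an animated image are assumptions; the patent does not name a container format.

```python
import cv2
import numpy as np

def compose_dynamic_image(still_image, target_contour, model_frames):
    """Overlay action-model frames inside the target region; keep the rest static."""
    mask = np.zeros(still_image.shape[:2], dtype=np.uint8)
    cv2.drawContours(mask, [target_contour], -1, 255, thickness=cv2.FILLED)
    x, y, w, h = cv2.boundingRect(target_contour)
    out_frames = []
    for model_frame in model_frames:             # frames assumed BGR, same as still_image
        frame = still_image.copy()
        patch = cv2.resize(model_frame, (w, h))
        region_mask = mask[y:y + h, x:x + w] == 255
        frame[y:y + h, x:x + w][region_mask] = patch[region_mask]
        out_frames.append(frame)
    return out_frames                            # e.g. to be encoded as an animated image
```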
After the dynamic image corresponding to the image to be processed is obtained, when the user previews the image to be processed, the corresponding dynamic image can be displayed directly.
It should be noted that when the user browses images and multiple images are displayed at the same time, every image is displayed statically; when the user views a single image, that image is displayed dynamically.
In the image processing method provided by the embodiment of the present invention, when the image to be processed is stored, image recognition is performed on it to identify its target image element; state analysis is performed on the target image element to determine its state information, and the action model corresponding to the state information is determined; image processing is then performed on the target image element according to the action model to generate the dynamic image corresponding to the image to be processed. In this way, a dynamically displayed image is obtained from the statically displayed image to be processed, so that when the user browses, the image browsed is a dynamic image. Meanwhile, the dynamic image is obtained by processing only a partial region of the image to be processed through the action model corresponding to its target image element, so it does not occupy a large amount of storage resources the way a video does; storage resources are saved while the viewing experience of the user is improved. Further, when the target image element that needs dynamic processing is determined, the element contours of the image elements are obtained through edge-finding processing and the target image element is determined according to the element contours, so that the image region that needs dynamic processing can be located accurately. While image resources are saved, the complexity of image processing is reduced and the processing performance of the system is improved.
Embodiment 2
Based on the foregoing embodiments, an embodiment of the present invention provides an image processing method. The method is applied to a terminal, and the functions realized by the method can be realized by a processor in the terminal calling program code; of course, the program code can be stored in a computer storage medium. It can be seen that the terminal includes at least a processor and a storage medium.
Fig. 7 is a schematic flowchart of the image processing method in Embodiment 2 of the present invention. As shown in Fig. 7, the method includes:
S701: performing image recognition on an image to be processed to identify a target image element of the image to be processed;
When the image to be processed includes multiple associated images, image recognition is performed on each associated image respectively to identify the target image element of each associated image. When multiple images are continuous images of the same photographed object, the multiple images are associated images; that is, associated images are multiple images of the same photographed object that are continuous in time. For example, the image to be processed includes image 1, image 2 and image 3, which are the images of the same photographed object corresponding to the continuous time points 1, 2 and 3. When image recognition is performed on image 1, image 2 and image 3 respectively to obtain their target image elements, the same target image element exists in image 1, image 2 and image 3, but the action of the target image element may differ across the three images.
It should be noted that each associated image may include multiple target image elements.
S702: when the image to be processed includes at least two associated images, performing state analysis on the target image element in each associated image respectively, and determining the sub-state information of the target image element in each associated image; and determining the state information of the target image element according to each piece of sub-state information;
After image recognition is performed on each associated image and the target image element of each associated image is identified, state analysis is performed on the target image element of each associated image respectively to determine the sub-state information of the target image element in each associated image. Here, the state information of the target image element of an associated image is referred to as sub-state information. After the sub-state information of the target image element in each associated image is determined, the state information of the target image element is determined from its sub-state information in the associated images.
When state analysis is performed on the target image element in each associated image, the three-dimensional analysis method of Embodiment 1 can be used; the specific analysis method is not repeated here. After the sub-state information of the target image element in each associated image is determined, the state information of the target image element is determined according to the sub-state information; that is, a continuous dynamic action is determined from the static actions of the target image element in the associated images. For example, when the associated images include image 1, image 2 and image 3, all containing the target image element "hand" as shown in Fig. 8 (a), (b) and (c) respectively, the corresponding sub-state information is sub-state information 1, 2 and 3: sub-state information 1 is waving to the left, sub-state information 2 is the hand being vertical, and sub-state information 3 is waving to the right. From sub-state information 1, 2 and 3 the corresponding state information is determined to be waving left and right.
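A minimal sketch of deriving one continuous state from the per-image sub-states, with each sub-state reduced to a single hand angle (negative for left, positive for right); this angle convention and the "waving" rule are illustrative assumptions, not from the patent.

```python
def combine_sub_states(sub_angles):
    """Infer a continuous action from per-image angles (negative = left, positive = right)."""
    crosses_both_sides = min(sub_angles) < 0 < max(sub_angles)
    if crosses_both_sides:
        return "waving left and right"
    return "static"          # no continuous action recognizable from these sub-states

# Example roughly matching Fig. 8: left, near vertical, right.
print(combine_sub_states([-20, 5, 40]))   # -> "waving left and right"
```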
S703: determining the action model corresponding to the state information;
After the state information is determined, the action model corresponding to it is determined. For the sub-state information of the target image element shown in Fig. 8, the corresponding state information is waving left and right, so the determined action model is the action model of waving left and right.
It should be noted that when the action model is determined, the state information of the target image element in the associated images can be used directly as the action model, or supplementary information for generating a dynamic graph can be added on the basis of that state information to generate the action model. When a continuous action cannot be obtained directly from the sub-state information of the target image element in the associated images, the missing supplementary information can be added on the basis of the dynamic graph obtained by merging the target image elements of the associated images. For example, when the three pieces of sub-state information shown in Fig. 8 are three states of one continuous action, with the angle in Fig. 8 (a) being 20 degrees counterclockwise to the right, the angle in Fig. 8 (b) being 10 degrees clockwise to the left, and the angle in Fig. 8 (c) being 40 degrees clockwise to the left, a continuous left-right waving action can be obtained directly from the sub-state information, and the dynamic graph obtained by merging the target image elements of the associated images is used directly as the left-right waving action model. When the angle in Fig. 8 (a) is 60 degrees counterclockwise to the right, the angle in Fig. 8 (b) is 10 degrees clockwise to the left, and the angle in Fig. 8 (c) is 40 degrees clockwise to the left, a continuous left-right waving action cannot be obtained directly from the sub-state information; in that case supplementary information of 20 degrees counterclockwise to the left is added on the basis of the dynamic graph obtained by merging the target image elements of the associated images, and the left-right waving action model is synthesized.
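One way to read the supplementing step is as inserting interpolated intermediate angles whenever consecutive sub-states jump too far apart to look continuous; the threshold and the linear interpolation below are assumptions rather than the patent's method.

```python
def supplement_angles(angles, max_step=30):
    """Insert interpolated angles wherever consecutive sub-states differ too much."""
    smoothed = [angles[0]]
    for nxt in angles[1:]:
        prev = smoothed[-1]
        steps = int(abs(nxt - prev) // max_step)
        for i in range(1, steps + 1):              # supplementary in-between angles
            smoothed.append(prev + (nxt - prev) * i / (steps + 1))
        smoothed.append(nxt)
    return smoothed

# 60 degrees right down to 10 degrees left is too large a jump, so angles are inserted.
print(supplement_angles([60, -10, -40]))
```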
S704: performing image processing on the target image element according to the action model to generate a dynamic image corresponding to the image to be processed.
After the action model is determined, image processing is performed on the target image element with the determined action model, and a dynamic image in which the target image element changes dynamically is generated. In this case, the target image element is displayed dynamically, while the part outside the target image element is displayed statically.
In practical applications, when the user needs dynamic display, the terminal may, upon receiving a single photographing instruction, shoot multiple continuous associated images, perform image processing on them, and generate one dynamic graph. In this case the multiple images can be compressed, so that dynamic display is achieved without occupying extra storage space.
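A minimal sketch of that capture path: one photographing instruction triggers a short burst, and the burst is folded into a single dynamic image by the steps of Embodiments 1 and 2. The burst size and the injected callables are assumptions made so the sketch stays self-contained.

```python
from typing import Any, Callable, List

def on_photograph_instruction(capture: Callable[[], Any],
                              build_dynamic_image: Callable[[List[Any]], Any],
                              burst_size: int = 3):
    """One photographing instruction -> burst of associated images -> one dynamic image."""
    associated_images = [capture() for _ in range(burst_size)]
    # build_dynamic_image would run the Embodiment 1/2 pipeline: per-image target
    # recognition, sub-state combination, action model selection, and compositing.
    return build_dynamic_image(associated_images)
```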
In the image processing method provided by this embodiment of the present invention, when the image to be processed includes multiple continuous associated images, state analysis is performed on the target image element in each associated image, and the state information of the target image element is determined from its sub-state information in each associated image, so that the corresponding dynamic model is determined from the multiple continuous associated images.
Embodiment 3
Based on the foregoing embodiments, an embodiment of the present invention provides an image processing method. The method is applied to a terminal, and the functions realized by the method can be realized by a processor in the terminal calling program code; of course, the program code can be stored in a computer storage medium. It can be seen that the terminal includes at least a processor and a storage medium.
Fig. 9 is a schematic flowchart of the image processing method in Embodiment 3 of the present invention. As shown in Fig. 9, the method includes:
S901: performing image recognition on an image to be processed to identify a target image element of the image to be processed;
S902: when the target image element includes at least two associated target image elements, performing state analysis on each associated target image element, determining the associated state information of each associated target image element, and determining the action model corresponding to each piece of associated state information according to each piece of associated state information and the relationship between the pieces of associated state information;
When the target image element is identified, several situations are possible: a single target image element, multiple independent target image elements, or multiple associated target image elements. When multiple target image elements are included, independent target image elements and associated target image elements may both be present; for example, the target image elements include element 1, element 2, element 3 and element 4, where element 1 and element 3 are independent target image elements, and element 2 and element 4 are associated target image elements. Here, associated target image elements are target image elements that have an action interaction. For example, target image element A is a hand and target image element B is an apple; when the apple is located on the hand, target image element A and target image element B are determined to be associated target image elements.
After state analysis is performed on the associated target image elements, the obtained state information is referred to as associated state information, and the state models corresponding to the associated target image elements are determined according to each piece of associated state information and the relationship between the pieces of associated state information. For example, the associated state information of target image element A, the hand, is a hand holding an apple, and the state information of target image element B, the apple, is that the apple is still. When the action models of the associated target image elements are determined, it is determined that the apple is held and moved upward, so the action model of target image element A, the hand, is moving upward, and the action model of target image element B, the apple, is moving upward together with the hand.
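A minimal sketch of propagating the motion of one element to an element associated with it, as in the hand-and-apple example; the data structures and model names are assumptions made only for illustration.

```python
def assign_action_models(elements, associations):
    """elements: {name: {"state": ..., "model": ...}}; associations: [(driver, follower)]."""
    models = {name: info.get("model") for name, info in elements.items()}
    for driver, follower in associations:
        # An associated element with no motion of its own follows its driver's model.
        if models.get(follower) is None and models.get(driver) is not None:
            models[follower] = {"follows": driver, "model": models[driver]}
    return models

hand_apple = {
    "hand":  {"state": "holding an apple", "model": "move_up"},
    "apple": {"state": "still",            "model": None},
}
print(assign_action_models(hand_apple, [("hand", "apple")]))
# -> the apple's action model follows the hand's move_up model
```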
S903: performing image processing on the target image element according to the action model to generate a dynamic image corresponding to the image to be processed.
In the embodiment of the present invention, when the image to be processed includes multiple associated target image elements, the action model corresponding to each associated target image element is determined not only with reference to the state information of that associated target image element itself, but also with reference to its relationship with the state information of the associated target image elements it is associated with, so that the action model of each target image element can be located accurately.
Embodiment 4
Based on the foregoing method embodiments, an embodiment of the present invention provides an image processing device. As shown in Fig. 10, the device includes: a memory 1001, a processor 1002, and a computer program stored on the memory 1001 and executable on the processor 1002. The processor 1002 executes the computer program to implement:
performing image recognition on an image to be processed to identify a target image element of the image to be processed; performing state analysis on the target image element to determine state information of the target image element, and determining an action model corresponding to the state information; and performing image processing on the target image element according to the action model to generate a dynamic image corresponding to the image to be processed.
When the processor 1002 executes the computer program, performing image recognition on the image to be processed to identify the target image element of the image to be processed is implemented as including: performing edge-finding processing on the image to be processed to obtain the element contours corresponding to the image elements in the image to be processed; and determining the target image element among the image elements using the element contours.
When the processor 1002 executes the computer program, performing state analysis on the target image element to determine its state information is implemented as including: performing three-dimensional analysis on the target image element to obtain its state information.
When the processor 1002 executes the computer program and the image to be processed includes at least two associated images, performing state analysis on the target image element to determine its state information is implemented as including: performing state analysis on the target image element in each associated image respectively, and determining the sub-state information of the target image element in each associated image; and determining the state information of the target image element according to each piece of sub-state information.
When the processor 1002 executes the computer program and the target image element includes at least two associated target image elements, performing state analysis on the target image element to determine its state information and determining the action model corresponding to the state information is implemented as including:
performing state analysis on each associated target image element, determining the associated state information of each associated target image element, and determining the action model corresponding to each piece of associated state information according to each piece of associated state information and the relationship between the pieces of associated state information.
It should be noted that the memory 1001 in the embodiment of the present invention may correspond to the memory 109 in Fig. 1, and the processor 1002 may correspond to the processor 110 in Fig. 1.
Embodiment 5
To realize the above method, an embodiment of the present invention also provides a computer-readable storage medium having a computer program stored thereon. When executed by a processor, the computer program implements: performing image recognition on an image to be processed to identify a target image element of the image to be processed; performing state analysis on the target image element to determine state information of the target image element, and determining an action model corresponding to the state information; and performing image processing on the target image element according to the action model to generate a dynamic image corresponding to the image to be processed.
When the computer program is executed by a processor, performing image recognition on the image to be processed to identify the target image element of the image to be processed is implemented as including: performing edge-finding processing on the image to be processed to obtain the element contours corresponding to the image elements in the image to be processed; and determining the target image element among the image elements using the element contours.
When the computer program is executed by a processor, performing state analysis on the target image element to determine its state information is implemented as including: performing three-dimensional analysis on the target image element to obtain its state information.
When the computer program is executed by a processor and the image to be processed includes at least two associated images, performing state analysis on the target image element to determine its state information is implemented as including: performing state analysis on the target image element in each associated image respectively, and determining the sub-state information of the target image element in each associated image; and determining the state information of the target image element according to each piece of sub-state information.
When the computer program is executed by a processor and the target image element includes at least two associated target image elements, performing state analysis on the target image element to determine its state information and determining the action model corresponding to the state information is implemented as including:
performing state analysis on each associated target image element, determining the associated state information of each associated target image element, and determining the action model corresponding to each piece of associated state information according to each piece of associated state information and the relationship between the pieces of associated state information.
It should be noted that herein, term " comprising ", "comprising" or its any other variant are intended to non-row His property includes, so that process, method, article or device including a series of elements not only include those key elements, and And also include the other element being not expressly set out, or also include for this process, method, article or device institute inherently Key element.In the absence of more restrictions, the key element limited by sentence "including a ...", it is not excluded that including this Other identical element also be present in the process of key element, method, article or device.
The embodiments of the present invention are for illustration only, do not represent the quality of embodiment.
Through the above description of the embodiments, those skilled in the art can be understood that above-described embodiment side Method can add the mode of required general hardware platform to realize by software, naturally it is also possible to by hardware, but in many cases The former is more preferably embodiment.Based on such understanding, technical scheme is substantially done to prior art in other words Going out the part of contribution can be embodied in the form of software product, and the computer software product is stored in a storage medium In (such as ROM/RAM, magnetic disc, CD), including some instructions to cause a station terminal (can be mobile phone, computer, service Device, air conditioner, or network equipment etc.) perform method described in each embodiment of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the specific embodiments described above. The specific embodiments are merely illustrative rather than restrictive. Inspired by the present invention, those of ordinary skill in the art can devise many other forms without departing from the concept of the present invention and the scope of protection of the claims, and all such forms fall within the protection of the present invention.

Claims (10)

1. An image processing method, characterized in that the method comprises:
performing image recognition on a pending image, and identifying a subject image element of the pending image;
performing state analysis on the subject image element to determine status information of the subject image element, and determining an action model corresponding to the status information;
performing image processing on the subject image element according to the action model to generate a dynamic image corresponding to the pending image.
2. The method according to claim 1, characterized in that performing image recognition on the pending image and identifying the subject image element of the pending image comprises:
performing edge-detection processing on the pending image to obtain element outlines corresponding to image elements in the pending image;
determining the subject image element among the image elements by using the element outlines.
3. The method according to claim 1, characterized in that performing state analysis on the subject image element to determine the status information of the subject image element comprises:
performing three-dimensional stereo analysis on the subject image element to obtain the status information of the subject image element.
4. The method according to claim 1, characterized in that, when the pending image comprises at least two associated images, performing state analysis on the subject image element to determine the status information of the subject image element comprises:
performing state analysis on the subject image element in each associated image respectively, and determining sub-status information of the subject image element in each associated image;
determining the status information of the subject image element according to each piece of sub-status information.
5. The method according to claim 1, characterized in that, when the subject image element comprises at least two associated target image elements, performing state analysis on the subject image element to determine the status information of the subject image element and determining the action model corresponding to the status information comprises:
performing state analysis on each associated target image element, determining association status information of each associated target image element, and determining the action model corresponding to each piece of association status information according to that association status information and the relations among the pieces of association status information.
6. An image processing apparatus, characterized in that the apparatus comprises a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the computer program to implement:
performing image recognition on a pending image, and identifying a subject image element of the pending image;
performing state analysis on the subject image element to determine status information of the subject image element, and determining an action model corresponding to the status information;
performing image processing on the subject image element according to the action model to generate a dynamic image corresponding to the pending image.
7. The apparatus according to claim 6, characterized in that, when the processor executes the computer program, performing image recognition on the pending image and identifying the subject image element of the pending image comprises:
performing edge-detection processing on the pending image to obtain element outlines corresponding to image elements in the pending image;
determining the subject image element among the image elements by using the element outlines.
8. The apparatus according to claim 6, characterized in that, when the processor executes the computer program and the pending image comprises at least two associated images, performing state analysis on the subject image element to determine the status information of the subject image element comprises:
performing state analysis on the subject image element in each associated image respectively, and determining sub-status information of the subject image element in each associated image;
determining the status information of the subject image element according to each piece of sub-status information.
9. The apparatus according to claim 6, characterized in that, when the processor executes the computer program and the subject image element comprises at least two associated target image elements, performing state analysis on the subject image element to determine the status information of the subject image element and determining the action model corresponding to the status information comprises:
performing state analysis on each associated target image element, determining association status information of each associated target image element, and determining the action model corresponding to each piece of association status information according to that association status information and the relations among the pieces of association status information.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the image processing method according to any one of claims 1 to 5 is implemented.
CN201711066232.7A 2017-10-30 2017-10-30 A kind of image processing method, device and computer-readable recording medium Pending CN107657638A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711066232.7A CN107657638A (en) 2017-10-30 2017-10-30 A kind of image processing method, device and computer-readable recording medium

Publications (1)

Publication Number Publication Date
CN107657638A true CN107657638A (en) 2018-02-02

Family

ID=61096332

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711066232.7A Pending CN107657638A (en) 2017-10-30 2017-10-30 A kind of image processing method, device and computer-readable recording medium

Country Status (1)

Country Link
CN (1) CN107657638A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109583514A (en) * 2018-12-19 2019-04-05 成都西纬科技有限公司 A kind of image processing method, device and computer storage medium
CN110322565A (en) * 2018-03-30 2019-10-11 深圳市掌网科技股份有限公司 A kind of image processing method and device
CN111787240A (en) * 2019-04-28 2020-10-16 北京京东尚科信息技术有限公司 Video generation method, device and computer readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103473799A (en) * 2013-09-02 2013-12-25 腾讯科技(深圳)有限公司 Picture dynamic processing method, device and terminal equipment
CN105049747A (en) * 2015-08-06 2015-11-11 广州市博源数码科技有限公司 System for identifying static image and converting static image into dynamic display
CN105577917A (en) * 2015-11-27 2016-05-11 小米科技有限责任公司 Photograph display method and device and intelligent terminal
CN105786417A (en) * 2014-12-19 2016-07-20 阿里巴巴集团控股有限公司 Method, device and equipment for dynamically displaying static pictures
CN106127829A (en) * 2016-06-28 2016-11-16 广东欧珀移动通信有限公司 The processing method of a kind of augmented reality, device and terminal
CN106251388A (en) * 2016-08-01 2016-12-21 乐视控股(北京)有限公司 Photo processing method and device
CN106791032A (en) * 2016-11-30 2017-05-31 世优(北京)科技有限公司 The method and apparatus that still image is converted to dynamic image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20180202)