CN108230312A - Image analysis method, device, and computer-readable storage medium - Google Patents

Image analysis method, device, and computer-readable storage medium

Info

Publication number
CN108230312A
CN108230312A (application CN201810005265.9A)
Authority
CN
China
Prior art keywords
image
area
target component
region
terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810005265.9A
Other languages
Chinese (zh)
Inventor
马栋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd filed Critical Nubia Technology Co Ltd
Priority to CN201810005265.9A priority Critical patent/CN108230312A/en
Publication of CN108230312A publication Critical patent/CN108230312A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30168 Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the invention discloses an image analysis method. The method includes: obtaining a first image and a second image, where the first image is captured of a target object using the image acquisition device of a first terminal, the second image is captured of the target object using the image acquisition device of a second terminal, and the first image and the second image are captured at the same angle; processing the first image to determine a first region, and processing the second image to determine a second region; calculating a first target parameter of the first region and a second target parameter of the second region, where the first target parameter indicates the sharpness of the image in the first region and the second target parameter indicates the sharpness of the image in the second region; and analyzing the first image based on the first target parameter and the second target parameter. Embodiments of the invention also disclose an image analysis device and a computer-readable storage medium, solving the inaccuracy of existing test methods.

Description

Image analysis method, device, and computer-readable storage medium
Technical field
The present invention relates to image processing techniques in the computer field, and in particular to an image analysis method, device, and computer-readable storage medium.
Background
With the development of science and technology, the capabilities of intelligent terminals have grown ever stronger; moreover, as living standards have risen, the use of intelligent terminals has become more and more widespread. For users, the camera function of an intelligent terminal such as a smartphone is among the most frequently used.
After producing the camera in a mobile phone and before shipping, a manufacturer generally tests the camera's quality. The most common current method is to take a photo with the camera of the phone under test and compare it with a photo of the same scene, shot at the same angle, by a pre-selected reference phone, and then to judge from this comparison the quality of the photos taken by the camera under test. However, because the existing method compares the photos with the naked eye, the test can be inaccurate; the comparison is also labor-intensive and causes a significant waste of human resources.
Summary of the invention
In view of this, embodiments of the present invention aim to provide an image analysis method, device, and computer-readable storage medium that solve the inaccuracy of existing methods for testing the camera of a device under test, improve the accuracy of camera test results, avoid wasting human resources, and increase the degree of automation.
To achieve the above objectives, the technical solution of the invention is realized as follows:
An image analysis method, the method including:
obtaining a first image and a second image, where the first image is captured of a target object using the image acquisition device of a first terminal, the second image is captured of the target object using the image acquisition device of a second terminal, and the first image and the second image are captured at the same angle;
processing the first image to determine a first region, and processing the second image to determine a second region;
calculating a first target parameter of the first region and a second target parameter of the second region, where the first target parameter indicates the sharpness of the image in the first region and the second target parameter indicates the sharpness of the image in the second region;
analyzing the first image based on the first target parameter and the second target parameter.
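The claimed flow can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the patent's implementation: the sharpness metric (variance of a 4-neighbour Laplacian) and the pass/fail rule are assumed, since the claims only say that the target parameters indicate the clarity of the regions and that the two parameters are compared.

```python
# Minimal sketch of the claimed flow: compute a sharpness value ("target
# parameter") for a region of each image, then compare them. The metric used
# here, variance of a 4-neighbour Laplacian, is an assumption for
# illustration; the claims do not fix a particular sharpness formula.

def laplacian_variance(region):
    """Sharpness proxy: variance of a 4-neighbour Laplacian response.

    `region` is a list of rows of grayscale pixel values.
    """
    h, w = len(region), len(region[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (region[y - 1][x] + region[y + 1][x]
                   + region[y][x - 1] + region[y][x + 1]
                   - 4 * region[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def analyze(first_region, second_region):
    """Final step of the claim: judge the first image by comparing parameters."""
    first_param = laplacian_variance(first_region)    # first target parameter
    second_param = laplacian_variance(second_region)  # second target parameter
    return "pass" if first_param >= second_param else "fail"

# A sharp checkerboard region versus a flat (blurred-looking) region.
sharp = [[255 if (x + y) % 2 else 0 for x in range(8)] for y in range(8)]
flat = [[128] * 8 for _ in range(8)]
print(analyze(sharp, flat))  # pass
print(analyze(flat, sharp))  # fail
```

Laplacian variance is a common focus measure because blur suppresses high-frequency content, driving the response variance toward zero; any monotone clarity metric would serve the comparison step equally well.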
Optionally, processing the first image to determine the first region and processing the second image to determine the second region includes:
processing the first image using a preset algorithm to determine the first region from the first image;
processing the second image using the preset algorithm to determine the second region from the second image.
Optionally, processing the first image using the preset algorithm to determine the first region from the first image includes:
performing edge analysis on the first image using the preset algorithm to obtain a first position of the contour of the target object in the first image;
determining the first region from the first image based on the first position information.
Optionally, determining the first region from the first image based on the first position information includes:
determining a first edge region and a first central region from the first image according to a preset determining method;
determining whether the first edge region and the first central region include any region where the first position is located;
if the first edge region and the first central region include any region where the first position is located, taking the first edge region and the first central region as the first region.
Optionally, processing the second image using the preset algorithm to determine the second region from the second image includes:
performing edge analysis on the second image using the preset algorithm to obtain a second position of the contour of the target object in the second image;
determining the second region from the second image based on the second position information.
Optionally, determining the second region from the second image based on the second position information includes:
determining a second edge region and a second central region from the second image according to the preset determining method;
determining whether the second edge region and the second central region include any region where the second position is located;
if the second edge region and the second central region include any region where the second position is located, taking the second edge region and the second central region as the second region.
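The optional region-determination steps above can be illustrated as follows. The geometry used here (four one-third-sized edge strips plus a central box) is an assumed "preset determining method"; the patent does not specify how the edge and central regions are laid out, only that the regions containing the contour position are kept.

```python
# Sketch of the optional region-determination step: split an image into four
# edge regions and one central region, then keep whichever of them contains
# a contour position found by edge analysis. The 1/3-sized boxes are an
# assumed "preset determining method"; the patent does not fix the geometry.

def candidate_regions(width, height):
    """Return named boxes as (x0, y0, x1, y1) with exclusive upper bounds."""
    w3, h3 = width // 3, height // 3
    return {
        "top_edge": (0, 0, width, h3),
        "bottom_edge": (0, height - h3, width, height),
        "left_edge": (0, 0, w3, height),
        "right_edge": (width - w3, 0, width, height),
        "center": (w3, h3, width - w3, height - h3),
    }

def contains(box, point):
    x0, y0, x1, y1 = box
    x, y = point
    return x0 <= x < x1 and y0 <= y < y1

def determine_regions(width, height, contour_points):
    """Keep the candidate regions that contain at least one contour point."""
    boxes = candidate_regions(width, height)
    return {name: box for name, box in boxes.items()
            if any(contains(box, p) for p in contour_points)}

regions = determine_regions(90, 90, [(45, 45), (2, 45)])
print(sorted(regions))  # ['center', 'left_edge']
```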
Optionally, analyzing the first image based on the first target parameter and the second target parameter includes:
obtaining the magnitude relationship between the first target parameter and the second target parameter;
if the first target parameter is less than the second target parameter, determining that the sharpness of the first image does not meet the preset requirement.
Optionally, before the first image of the target object is captured using the image acquisition device of the first terminal and the second image of the target object is captured using the image acquisition device of the second terminal, the method further includes:
obtaining the position of the motor in the image acquisition device at the time the first image and the second image are captured;
determining, based on the position of the motor, whether focusing was accurate when the image acquisition device captured the first image and the second image.
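The optional focus check can be sketched as below. The numeric in-focus range for the focus motor is entirely an assumption: the patent says only that focus accuracy is judged from the position of the motor in the image acquisition device, without giving values or units.

```python
# Sketch of the optional focus check: the focus motor's recorded position at
# capture time is compared against an expected in-focus range. The range
# values are assumptions for illustration; the patent only says focus
# accuracy is determined "based on the position of the motor".

def focus_accurate(motor_position, in_focus_range=(120, 180)):
    """Return True if the recorded motor position lies in the expected range."""
    lo, hi = in_focus_range
    return lo <= motor_position <= hi

print(focus_accurate(150))  # True: within the assumed in-focus range
print(focus_accurate(40))   # False: lens parked far from the focus position
```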
An image analysis device, the device including a processor, a memory, and a communication bus;
the communication bus being used to implement the communication connection between the processor and the memory;
the processor being used to execute an image analysis program stored in the memory, so as to realize the following steps:
obtaining a first image and a second image, where the first image is captured of a target object using the image acquisition device of a first terminal, the second image is captured of the target object using the image acquisition device of a second terminal, and the first image and the second image are captured at the same angle;
processing the first image to determine a first region, and processing the second image to determine a second region;
calculating a first target parameter of the first region and a second target parameter of the second region, where the first target parameter indicates the sharpness of the image in the first region and the second target parameter indicates the sharpness of the image in the second region;
analyzing the first image based on the first target parameter and the second target parameter.
A computer-readable storage medium storing one or more programs that can be executed by one or more processors to realize the steps of the image analysis method described above.
With the image analysis method, device, and computer-readable storage medium provided by the embodiments of the present invention, a first image and a second image are obtained, the first image captured of a target object using the image acquisition device of a first terminal and the second image captured of the same target object, at the same angle, using the image acquisition device of a second terminal; the first image is then processed to determine a first region and the second image to determine a second region; a first target parameter of the first region and a second target parameter of the second region are calculated, the first target parameter indicating the sharpness of the image in the first region and the second target parameter indicating the sharpness of the image in the second region; finally, the first image is analyzed based on the first target parameter and the second target parameter. In this way, the image analysis device can automatically compare the two captured images without manual comparison, which solves the inaccuracy of existing methods for testing the camera of a device under test, improves the accuracy of camera test results, avoids the waste of human resources, and increases the degree of automation.
Description of the drawings
Fig. 1 is a hardware structure diagram of a mobile terminal for realizing embodiments of the present invention;
Fig. 2 is an architecture diagram of a communication system in which a mobile terminal provided by an embodiment of the present invention can operate;
Fig. 3 is a flow diagram of an image analysis method provided by an embodiment of the present invention;
Fig. 4 is a flow diagram of another image analysis method provided by an embodiment of the present invention;
Fig. 5 is a schematic diagram of a first image and a second image provided by an embodiment of the present invention;
Fig. 6 is a schematic diagram of box-selecting image regions provided by an embodiment of the present invention;
Fig. 7 is a schematic diagram of the first region selected in a first image provided by an embodiment of the present invention;
Fig. 8 is a flow diagram of yet another image analysis method provided by an embodiment of the present invention;
Fig. 9 is a structure diagram of an image analysis device provided by an embodiment of the present invention.
Specific embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings.
It should be appreciated that the specific embodiments described here merely illustrate the present invention and are not intended to limit it.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements only aid the explanation of the present invention and have no specific meaning of their own; "module", "component", and "unit" can therefore be used interchangeably.
A terminal can be implemented in a variety of forms. For example, the terminals described in the present invention can include mobile terminals such as mobile phones, tablet computers, laptops, palmtop computers, personal digital assistants (Personal Digital Assistant, PDA), portable media players (Portable Media Player, PMP), navigation devices, wearable devices, smart bracelets, and pedometers, as well as fixed terminals such as digital TVs and desktop computers.
The following description takes a mobile terminal as an example; those skilled in the art will appreciate that, apart from elements used specifically for mobile purposes, constructions according to embodiments of the present invention can also be applied to terminals of fixed type.
Referring to Fig. 1, a hardware structure diagram of a mobile terminal for realizing embodiments of the present invention: the mobile terminal 100 can include a radio frequency (Radio Frequency, RF) unit 101, a Wi-Fi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, a power supply 111, and other components. Those skilled in the art will understand that the mobile terminal structure shown in Fig. 1 does not limit the mobile terminal, which may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
The components of the mobile terminal are introduced in detail below with reference to Fig. 1:
The radio frequency unit 101 can be used to send and receive signals during messaging or a call; specifically, it passes downlink information received from a base station to the processor 110 for handling, and sends uplink data to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and so on. In addition, the radio frequency unit 101 can also communicate with networks and other devices through wireless communication. The wireless communication can use any communication standard or protocol, including but not limited to Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access 2000 (CDMA2000), Wideband Code Division Multiple Access (WCDMA), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), Frequency Division Duplexing Long Term Evolution (FDD-LTE), and Time Division Duplexing Long Term Evolution (TDD-LTE).
Wi-Fi is a short-range wireless transmission technology. Through the Wi-Fi module 102, the mobile terminal can help the user send and receive e-mail, browse web pages, access streaming video, and so on, providing the user with wireless broadband Internet access. Although Fig. 1 shows the Wi-Fi module 102, it is understood that it is not an essential part of the mobile terminal and can be omitted as needed without changing the essence of the invention.
The audio output unit 103 can, when the mobile terminal 100 is in a call-signal reception mode, call mode, recording mode, speech recognition mode, broadcast reception mode, or similar mode, convert audio data received by the radio frequency unit 101 or the Wi-Fi module 102, or stored in the memory 109, into an audio signal and output it as sound. Moreover, the audio output unit 103 can also provide audio output related to a specific function performed by the mobile terminal 100 (for example, a call-signal reception sound or a message reception sound). The audio output unit 103 can include a loudspeaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 can include a graphics processing unit (Graphics Processing Unit, GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capture apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames can be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the Wi-Fi module 102. The microphone 1042 can receive sound (audio data) in operating modes such as a call mode, recording mode, or speech recognition mode and process such sound into audio data. In the case of a call, the processed audio (voice) data can be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101. The microphone 1042 can implement various noise cancellation (or suppression) algorithms to eliminate (or suppress) noise or interference generated while sending and receiving audio signals.
The mobile terminal 100 further includes at least one sensor 105, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 1061 according to the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally three axes) and, when stationary, the magnitude and direction of gravity; it can be used for applications that recognize the phone's posture (such as landscape/portrait switching, related games, and magnetometer pose calibration) and for vibration-recognition functions (such as a pedometer or tap detection). Other sensors that can also be configured on the phone, such as a fingerprint sensor, pressure sensor, iris sensor, molecular sensor, gyroscope, barometer, hygrometer, thermometer, and infrared sensor, are not described in detail here.
The display unit 106 is used to display information input by the user or provided to the user. The display unit 106 can include a display panel 1061, which can be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED) display, or the like.
The user input unit 107 can be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also called a touch screen, collects touch operations by the user on or near it (for example, operations with a finger, stylus, or any other suitable object or accessory on or near the touch panel 1071) and drives the corresponding connection device according to a preset program. The touch panel 1071 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the user's touch position and the signal brought by the touch operation and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 can be realized in resistive, capacitive, infrared, surface-acoustic-wave, and other types. Besides the touch panel 1071, the user input unit 107 can also include other input devices 1072, which can include but are not limited to one or more of a physical keyboard, function keys (such as volume control keys and a power key), a trackball, a mouse, and a joystick; no specific limitation is made here.
Further, the touch panel 1071 can cover the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides corresponding visual output on the display panel 1061 according to the type of the touch event. Although in Fig. 1 the touch panel 1071 and the display panel 1061 are two independent components realizing the input and output functions of the mobile terminal, in some embodiments the touch panel 1071 and the display panel 1061 can be integrated to realize the input and output functions of the mobile terminal; no specific limitation is made here.
The interface unit 108 serves as an interface through which at least one external device can connect with the mobile terminal 100. For example, the external device can include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and so on. The interface unit 108 can be used to receive input (for example, data information or electric power) from an external device and transfer the received input to one or more elements within the mobile terminal 100, or can be used to transmit data between the mobile terminal 100 and an external device.
The memory 109 can be used to store software programs and various data. The memory 109 can mainly include a program storage area and a data storage area: the program storage area can store an operating system and application programs required by at least one function (such as a sound playback function or an image playback function), while the data storage area can store data created according to the use of the phone (such as audio data and a phone book). In addition, the memory 109 can include high-speed random access memory and can also include non-volatile memory, for example at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 110 is the control center of the mobile terminal. It connects the parts of the entire mobile terminal through various interfaces and lines and, by running or executing the software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, performs the various functions of the mobile terminal and processes data, thereby monitoring the mobile terminal as a whole. The processor 110 may include one or more processing units; preferably, the processor 110 can integrate an application processor, which mainly handles the operating system, user interface, application programs, and the like, and a modem processor, which mainly handles wireless communication. It is understood that the modem processor may also not be integrated into the processor 110.
The mobile terminal 100 can also include a power supply 111 (such as a battery) that supplies power to the components. Preferably, the power supply 111 can be logically connected with the processor 110 through a power management system, so that functions such as charging, discharging, and power-consumption management are realized through the power management system.
Although not shown in Fig. 1, the mobile terminal 100 can also include a Bluetooth module and the like, which are not described in detail here.
To facilitate understanding of the embodiments of the present invention, the communication network system on which the mobile terminal of the present invention is based is described below.
Referring to Fig. 2, Fig. 2 is an architecture diagram of a communication network system provided by an embodiment of the present invention. The communication network system is an LTE system of universal mobile communication technology, and includes, connected in communication in sequence, a user equipment (User Equipment, UE) 201, an Evolved UMTS Terrestrial Radio Access Network (E-UTRAN) 202, an Evolved Packet Core (EPC) 203, and an operator's IP services 204.
Specifically, the UE 201 can be the terminal 100 described above, which is not repeated here.
The E-UTRAN 202 includes eNodeB 2021 and other eNodeBs 2022. The eNodeB 2021 can connect with the other eNodeBs 2022 through a backhaul (such as an X2 interface); the eNodeB 2021 is connected to the EPC 203 and can provide the UE 201 with access to the EPC 203.
The EPC 203 can include a Mobility Management Entity (MME) 2031, a Home Subscriber Server (HSS) 2032, other MMEs 2033, a Serving Gateway (SGW) 2034, a PDN Gateway (PGW) 2035, a Policy and Charging Rules Function (PCRF) 2036, and so on. The MME 2031 is a control node that handles signaling between the UE 201 and the EPC 203, providing bearer and connection management. The HSS 2032 provides some registers to manage functions such as a home location register (not shown) and preserves user-specific information about service characteristics, data rates, and the like. All user data can be sent through the SGW 2034; the PGW 2035 can provide IP address allocation and other functions for the UE 201; and the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, selecting and providing available policy and charging control decisions for a policy and charging enforcement function unit (not shown).
The IP services 204 can include the Internet, intranets, an IP Multimedia Subsystem (IP Multimedia Subsystem, IMS), or other IP services.
Although the above description takes the LTE system as an example, those skilled in the art should understand that the present invention is applicable not only to the LTE system but also to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems; no limitation is made here.
Based on the above mobile terminal hardware structure and communication system, the embodiments of the present invention are proposed.
An embodiment of the present invention provides an image analysis method. Referring to Fig. 3, the method can include the following steps:
Step 301: obtain a first image and a second image.
Here, the first image is captured of a target object using the image acquisition device of a first terminal, the second image is captured of the target object using the image acquisition device of a second terminal, and the first image and the second image are captured at the same angle.
In other embodiments of the invention, step 301 of obtaining the first image and the second image can be realized by an image analysis device; the image analysis device can be any device capable of analyzing and comparing acquired images, such as a server or a terminal. The first image and the second image can be captured by the image acquisition devices of the first terminal and the second terminal respectively, and can be obtained by capturing the same object at the same angle.
In one feasible implementation, the first terminal can be a device under test, for example a phone that needs to be tested; the second terminal can be a reference device compared against the device under test, for example a reference phone with a higher-quality camera that the user selects according to preference or actual demand, such as a phone of model XX. It should be noted that the first terminal is different from the second terminal, and the image acquisition devices can be the cameras in the first terminal and the second terminal.
Before the camera of the first terminal and the camera of the second terminal capture the first image and the second image, it is necessary to first detect whether the cameras of the first terminal and the second terminal are in working state. This can be realized by detecting whether the camera of the first terminal and the camera of the second terminal are opened; if both cameras are opened, the cameras of the first terminal and the second terminal are in working state, and images can then be captured with the cameras of the first terminal and the second terminal.
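The precondition described in the embodiment, that both cameras must be in working state before capture, can be sketched as follows. The Camera class and its opened flag are hypothetical stand-ins for whatever camera-state API a real terminal exposes; they are not from the patent.

```python
# Sketch of the precondition in the embodiment: before capturing, verify that
# the cameras of both terminals are in working state (here, simply "opened").
# The Camera class and its `opened` flag are assumptions for illustration.

class Camera:
    def __init__(self, opened):
        self.opened = opened  # whether the camera has been opened

def ready_to_capture(first_cam, second_cam):
    """Both cameras must be open (in working state) before image capture."""
    return first_cam.opened and second_cam.opened

print(ready_to_capture(Camera(True), Camera(True)))   # True
print(ready_to_capture(Camera(True), Camera(False)))  # False
```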
Step 302: process the first image to determine a first area, and process the second image to determine a second area.
In other embodiments of the invention, step 302 of processing the first image to determine the first area and processing the second image to determine the second area may be performed by the image analysis device.
Step 302 may be implemented as follows:
Step 302a: process the first image using a preset algorithm, and determine the first area from the first image.
Step 302b: process the second image using the preset algorithm, and determine the second area from the second image.
The preset algorithm is a pre-configured algorithm capable of dividing an image into regions. In one feasible implementation, the preset algorithm may include an edge-analysis algorithm, for example one based on the Open Source Computer Vision Library (OpenCV). Specifically, edge detection is performed on the first image and the second image using OpenCV to obtain the contours of the objects contained in the two images. The first image and the second image are then divided into regions based on those contours, yielding the first area in the first image and the second area in the second image. It should be noted that the first area and the second area may each include at least five regions.
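As a minimal sketch of the edge-analysis step described above, the following pure-Python stand-in locates an object's contour by simple gradient thresholding and returns its bounding box. It only illustrates the role edge detection plays here; the gradient operator and threshold are assumptions, not details from the patent, and a real implementation would use OpenCV's `Canny` and `findContours`.

```python
def edge_bounding_box(img, thresh=50):
    """Find the bounding box (x, y, w, h) of edge pixels in a grayscale
    image given as a 2-D list -- a crude stand-in for Canny + findContours."""
    h, w = len(img), len(img[0])
    xs, ys = [], []
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x + 1] - img[y][x]      # horizontal gradient
            gy = img[y + 1][x] - img[y][x]      # vertical gradient
            if abs(gx) + abs(gy) >= thresh:     # crude edge response
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1)

# 8x8 dark background with a bright 3x3 square; the detected edges
# bound the square's contour
img = [[0] * 8 for _ in range(8)]
for y in range(3, 6):
    for x in range(2, 5):
        img[y][x] = 255
box = edge_bounding_box(img)
```

The bounding box then drives the region division: the regions framed from the image are those that overlap the contour's position.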
It should be noted that the first image and the second image may be processed with the same algorithm or with different algorithms; in either case the final result is that the first area is determined in the first image and the second area is determined in the second image.
Step 303: calculate a first target parameter of the first area and a second target parameter of the second area.
The first target parameter indicates the sharpness of the image in the first area, and the second target parameter indicates the sharpness of the image in the second area.
In other embodiments of the invention, step 303 of calculating the first target parameter of the first area and the second target parameter of the second area may be performed by the image analysis device. The two target parameters may be calculated using a parameter-calculation algorithm, for example a spatial frequency response (SFR) algorithm; in that case the first target parameter and the second target parameter may each be a modulation transfer function (MTF) value, the first target parameter being the MTF value of the first area and the second target parameter being the MTF value of the second area.
MTF, the modulation transfer function, describes how modulation depth varies as a function of spatial frequency. MTF was originally used to characterize the resolving power of lenses, and MTF curves are commonly published for camera lenses to describe their capability. Such curves are obtained by testing under ideal conditions that minimize the attenuation contributed by other parts of the system; however, MTF can also be used to evaluate the resolving power of an entire imaging system.
SFR measures how a single image responds as the spatial frequency of line patterns increases. In short, SFR is an alternative way of measuring MTF that greatly simplifies the testing process; the final result of an SFR calculation is an MTF curve.
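The SFR-to-MTF relationship described above can be sketched as follows: differentiate the edge spread function (ESF) sampled across an edge to get the line spread function (LSF), take the magnitude of its Fourier transform, and normalize so that MTF(0) = 1. This is a simplified illustration of the standard slanted-edge idea, not the patent's implementation; edge alignment, oversampling, and windowing are omitted.

```python
import cmath

def mtf_from_edge(esf):
    """Simplified SFR: ESF -> LSF (by differencing) -> |DFT| -> MTF."""
    lsf = [b - a for a, b in zip(esf, esf[1:])]   # line spread function
    n = len(lsf)
    mtf = []
    for k in range(n // 2 + 1):                   # DFT magnitude per frequency
        s = sum(v * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, v in enumerate(lsf))
        mtf.append(abs(s))
    return [m / mtf[0] for m in mtf]              # normalize so MTF(0) = 1

# a perfectly sharp step edge versus the same edge blurred by a 5-tap average
sharp = [0.0] * 16 + [1.0] * 16
padded = sharp[:1] * 2 + sharp + sharp[-1:] * 2
blurred = [sum(padded[i:i + 5]) / 5 for i in range(len(sharp))]

m_sharp = mtf_from_edge(sharp)
m_blurred = mtf_from_edge(blurred)
# the blurred edge loses contrast at higher spatial frequencies,
# i.e. its MTF values fall below those of the sharp edge
```

This is why comparing per-region MTF values, as the method does, amounts to comparing sharpness: the blurrier camera produces lower MTF at the same spatial frequency.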
Step 304: analyze the first image based on the first target parameter and the second target parameter.
Step 304 may be performed by the image analysis device. After the MTF value of the first area of the first image and the MTF value of the second area of the second image have been obtained, the two MTF values are compared, and the sharpness of the first image is analyzed according to the comparison result.
With the image analysis method provided by this embodiment of the invention, a first image and a second image are obtained, where the first image is acquired for a target object using the image collector of a first terminal, the second image is acquired for the same target object using the image collector of a second terminal, and the two images are acquired at the same angle. The first image is then processed to determine a first area and the second image is processed to determine a second area; a first target parameter of the first area and a second target parameter of the second area are calculated, the first target parameter indicating the sharpness of the image in the first area and the second target parameter indicating the sharpness of the image in the second area; finally, the first image is analyzed based on the first target parameter and the second target parameter. In this way the image analysis device can compare the two acquired images automatically, without manual comparison, which solves the inaccuracy of existing methods for testing a test machine's camera, improves the accuracy of the camera test results, avoids wasting human resources, and enhances the intelligence of the machine.
Based on the foregoing embodiment, an embodiment of the invention provides an image analysis method. Referring to Fig. 4, the method includes the following steps:
Step 401: the image analysis device obtains a first image and a second image.
The first image is acquired for a target object using the image collector of the first terminal, and the second image is acquired for the same target object using the image collector of the second terminal; the two images are acquired at the same angle.
Step 402: the image analysis device performs edge analysis on the first image using the preset algorithm to obtain a first position of the contour of the target object in the first image.
The first position is the location, within the first image, of the contour of the object the first image contains. In one feasible implementation, the first position may be obtained by treating the first image as a coordinate system according to the distribution of its pixels and reading off the coordinates of the contour of the object contained in the first image. For example, taking the case where the captured first and second images each contain a dog: as shown in Fig. 5, the first position may be the position of the dog's contour.
Step 403: the image analysis device determines the first area from the first image based on the first position.
After the first position in the first image is obtained, the edge regions and the central region of the first image can be determined, and the first area is determined from the first image based on the relationship between the first position and those edge and central regions. For example, as shown in Fig. 6, if the region enclosed by frame A represents the whole image, the first area may be the regions of the image corresponding to frames 1, 2, 3, 4, and 5.
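The five frames of Fig. 6 (four corners plus the center) can be sketched as follows; the fraction of the image each frame occupies is an assumed parameter, since the patent only fixes the count and placement of the regions.

```python
def five_regions(width, height, frac=0.25):
    """Return (x, y, w, h) boxes for four corner regions and one center
    region of a width x height image; frac is an assumed size fraction."""
    w, h = int(width * frac), int(height * frac)
    cx, cy = (width - w) // 2, (height - h) // 2
    return [
        (0, 0, w, h),                      # top-left     (frame 1)
        (width - w, 0, w, h),              # top-right    (frame 2)
        (0, height - h, w, h),             # bottom-left  (frame 3)
        (width - w, height - h, w, h),     # bottom-right (frame 4)
        (cx, cy, w, h),                    # center       (frame 5)
    ]

regions = five_regions(400, 300)
```

Each of these boxes is then kept as part of the first area if it relates to the contour position as described above.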
Step 404: the image analysis device performs edge analysis on the second image using the preset algorithm to obtain a second position of the contour of the target object in the second image.
The second position is the location, within the second image, of the contour of the object the second image contains. In one feasible implementation, the second position may be obtained by treating the second image as a coordinate system according to the distribution of its pixels and reading off the coordinates of the contour of the object contained in the second image.
Step 405: the image analysis device determines the second area from the second image based on the second position.
After the second position in the second image is obtained, the edge regions and the central region of the second image can be determined, and the second area is determined from the second image based on the relationship between the second position and those edge and central regions.
Step 406: the image analysis device calculates the first target parameter of the first area and the second target parameter of the second area.
Step 407: the image analysis device obtains the magnitude relationship between the first target parameter and the second target parameter.
If the first target parameter is the MTF value of the first area and the second target parameter is the MTF value of the second area, the image analysis device compares the MTF values of the corresponding regions of the two images. If an MTF value of the first image is lower than that of the second image, the first image is less sharp than the second image and its sharpness needs to be adjusted; if the MTF value of the first image is greater than or equal to that of the second image, the first image is at least as sharp as the second image and no adjustment is needed. For example, taking the case where the captured first and second images each contain a dog: as shown in Fig. 7, the region a1 in the upper-left corner, the region b1 in the upper-right corner, the region c1 in the lower-left corner, and the region d1 in the lower-right corner of the first image may be selected as the first area, each of regions a1, b1, c1, and d1 framing part of the dog's contour. Meanwhile, the second area selected in the second image includes the region a2 in the upper-left corner, the region b2 in the upper-right corner, the region c2 in the lower-left corner, and the region d2 in the lower-right corner of the second image. The MTF values of the first area and the second area can then be compared region by region: a1 against a2, b1 against b2, c1 against c2, and d1 against d2.
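The region-by-region MTF comparison (a1 against a2, b1 against b2, and so on) can be sketched as below; the MTF values are illustrative numbers, not measurements from the patent.

```python
def compare_regions(mtf_first, mtf_second):
    """Pair up corresponding regions of the two images and return the
    indices where the test image's MTF falls below the reference's."""
    return [i for i, (m1, m2) in enumerate(zip(mtf_first, mtf_second))
            if m1 < m2]

# MTF values for regions a, b, c, d of each image (illustrative numbers)
first  = [0.42, 0.55, 0.38, 0.60]   # test machine   (a1, b1, c1, d1)
second = [0.50, 0.50, 0.45, 0.58]   # reference machine (a2, b2, c2, d2)
worse = compare_regions(first, second)   # regions a and c need adjustment
```

A non-empty result corresponds to the case where the first image's sharpness does not meet the preset requirement.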
Step 408: if the first target parameter is lower than the second target parameter, the image analysis device determines that the sharpness of the first image does not meet the preset requirement.
It should be noted that, for steps or concepts in this embodiment that are the same as in other embodiments, reference may be made to the descriptions in those embodiments; details are not repeated here.
With the image analysis method provided by this embodiment of the invention, a first image and a second image are obtained, where the first image is acquired for a target object using the image collector of a first terminal, the second image is acquired for the same target object using the image collector of a second terminal, and the two images are acquired at the same angle. The first image is then processed to determine a first area and the second image is processed to determine a second area; a first target parameter of the first area and a second target parameter of the second area are calculated, the first target parameter indicating the sharpness of the image in the first area and the second target parameter indicating the sharpness of the image in the second area; finally, the first image is analyzed based on the first target parameter and the second target parameter. In this way the image analysis device can compare the two acquired images automatically, without manual comparison, which solves the inaccuracy of existing methods for testing a test machine's camera, improves the accuracy of the camera test results, avoids wasting human resources, and enhances the intelligence of the machine.
Based on the foregoing embodiments, an embodiment of the invention provides an image analysis method. Referring to Fig. 8, the method includes the following steps:
Step 501: the image analysis device obtains a first image and a second image.
The first image is acquired for a target object using the image collector of the first terminal, and the second image is acquired for the same target object using the image collector of the second terminal; the two images are acquired at the same angle.
Step 502: the image analysis device performs edge analysis on the first image using the preset algorithm to obtain a first position of the contour of the target object in the first image.
Step 503: the image analysis device determines a first edge region and a first central region from the first image according to a preset determination method.
When determining the first edge region and the first central region in the first image, regions of a preset size may first be framed at the four corners of the first image, as shown in Fig. 7, to form the first edge region; a region of the corresponding size at the center of the first image is then framed to form the first central region.
Step 504: the image analysis device determines whether the first edge region and the first central region include any region where the first position lies.
After the first edge region and the first central region have been framed, it is judged whether they include any of the region covered by the first position of the first image. This judgment may be whether the first edge region and the first central region include at least part of the region covered by the first position; if they do, the first edge region and the first central region may be determined to be the first area.
If the first edge region and the first central region include no part of the region covered by the first position, the initially framed size needs to be enlarged, and frames of the enlarged size are then selected in the first image by the same procedure used with the initial preset size, until the framed first edge region and first central region include at least part of the region covered by the first position.
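The fallback just described, i.e. enlarging the framed size until some corner or center frame overlaps the contour, can be sketched as follows. The initial size and the growth step are assumptions; the patent does not fix them.

```python
def grow_until_covered(contour_box, img_w, img_h, size=40, step=10):
    """Grow the corner/center frame size until at least one frame
    overlaps the contour's bounding box; return the overlapping frames."""
    def overlaps(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    while size <= min(img_w, img_h):
        frames = [(0, 0, size, size),                          # corners
                  (img_w - size, 0, size, size),
                  (0, img_h - size, size, size),
                  (img_w - size, img_h - size, size, size),
                  ((img_w - size) // 2, (img_h - size) // 2,   # center
                   size, size)]
        hit = [f for f in frames if overlaps(f, contour_box)]
        if hit:
            return hit
        size += step                                           # enlarge and retry
    return []
```

With a contour near the top-left corner, the initial frame size already suffices; a contour sitting between the frames forces one or more enlargement rounds.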
Step 505: if the first edge region and the first central region include any region where the first position lies, the image analysis device determines that the first edge region and the first central region are the first area.
Step 506: the image analysis device performs edge analysis on the second image using the preset algorithm to obtain a second position of the contour of the target object in the second image.
Step 507: the image analysis device determines a second edge region and a second central region from the second image according to the preset determination method.
Step 508: the image analysis device determines whether the second edge region and the second central region include any region where the second position lies.
Step 509: if the second edge region and the second central region include any region where the second position lies, the image analysis device determines that the second edge region and the second central region are the second area.
It should be noted that steps 507-509 describe determining the second area from the second image, and their specific implementation is the same as that of steps 503-505; reference may be made to the explanation of steps 503-505, and details are not repeated here.
Step 510: the image analysis device calculates the first target parameter of the first area and the second target parameter of the second area.
Step 511: the image analysis device obtains the magnitude relationship between the first target parameter and the second target parameter.
Step 512: if the first target parameter is lower than the second target parameter, the image analysis device determines that the sharpness of the first image does not meet the preset requirement.
Based on the foregoing embodiments, in other embodiments of the invention, before the first image and the second image are obtained, the image analysis method may further include the following steps:
Step 513: the image analysis device obtains the position of the motor in the image collector at the time the first image and the second image were acquired.
Before the first image and the second image are obtained, the focusing state of the image collector at acquisition time can be analyzed in advance to ensure the accuracy of the acquired first and second images.
To determine the focusing state of the image collector when the first image and the second image were acquired, the position of the camera's motor at acquisition time can be compared with a preset position. If the difference between the motor position and the preset position is within a preset value, the camera focused accurately; if the difference exceeds the preset value, the focusing was inaccurate. The preset position may be a position obtained from actual use of the camera, for example by analyzing the motor positions of multiple groups of accurately focused shots; it may also be the motor position corresponding to accurate focusing during historical use.
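The motor-position focus check described above can be sketched in a few lines; the preset position and tolerance would in practice be derived from historical, accurately focused shots, and the values used here are assumptions.

```python
def focus_ok(motor_pos, preset_pos, tolerance=5):
    """Focusing counts as accurate when the motor position is within
    `tolerance` of the preset position (both values are assumptions)."""
    return abs(motor_pos - preset_pos) <= tolerance

# a shot whose motor stopped near the learned position passes the check;
# one that stopped far from it is flagged as mis-focused and re-acquired
accurate = focus_ok(102, 100)
inaccurate = focus_ok(110, 100)
```

Only images that pass this check would be fed into the region division and MTF comparison of the later steps.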
Step 514: based on the motor position, the image analysis device determines whether focusing was accurate when the image collector acquired the first image and the second image.
It should be noted that, for steps or concepts in this embodiment that are the same as in other embodiments, reference may be made to the descriptions in those embodiments; details are not repeated here.
The image analysis method provided by this embodiment of the invention can compare the two acquired images automatically, without manual comparison, which solves the inaccuracy of existing methods for testing a test machine's camera, improves the accuracy of the camera test results, avoids wasting human resources, and enhances the intelligence of the machine.
Based on the foregoing embodiments, an embodiment of the invention provides an image analysis device, which can be used in the image analysis methods provided by the embodiments corresponding to Figs. 3-4 and 8. Referring to Fig. 9, the device may include a processor 61, a memory 62, and a communication bus 63, where:
the communication bus 63 implements the communication connection between the processor 61 and the memory 62; and
the processor 61 executes an image analysis program stored in the memory 62 to implement the following steps:
obtaining a first image and a second image, where the first image is acquired for a target object using the image collector of a first terminal, the second image is acquired for the target object using the image collector of a second terminal, and the first image and the second image are acquired at the same angle;
processing the first image to determine a first area, and processing the second image to determine a second area;
calculating a first target parameter of the first area and a second target parameter of the second area, where the first target parameter indicates the sharpness of the image in the first area and the second target parameter indicates the sharpness of the image in the second area; and
analyzing the first image based on the first target parameter and the second target parameter.
In other embodiments of the invention, the processor 61 executes the image analysis program stored in the memory 62 to implement the following steps:
processing the first image using a preset algorithm to determine the first area from the first image; and
processing the second image using the preset algorithm to determine the second area from the second image.
In other embodiments of the invention, the processor 61 executes the image analysis program stored in the memory 62 to implement the following steps:
performing edge analysis on the first image using the preset algorithm to obtain a first position of the contour of the target object in the first image; and
determining the first area from the first image based on the first position.
In other embodiments of the invention, the processor 61 executes the image analysis program stored in the memory 62 to implement the following steps:
determining a first edge region and a first central region from the first image according to a preset determination method;
determining whether the first edge region and the first central region include any region where the first position lies; and
if they do, determining that the first edge region and the first central region are the first area.
In other embodiments of the invention, the processor 61 executes the image analysis program stored in the memory 62 to implement the following steps:
performing edge analysis on the second image using the preset algorithm to obtain a second position of the contour of the target object in the second image; and
determining the second area from the second image based on the second position.
In other embodiments of the invention, the processor 61 executes the image analysis program stored in the memory 62 to implement the following steps:
determining a second edge region and a second central region from the second image according to the preset determination method;
determining whether the second edge region and the second central region include any region where the second position lies; and
if they do, determining that the second edge region and the second central region are the second area.
In other embodiments of the invention, the processor 61 executes the image analysis program stored in the memory 62 to implement the following steps:
obtaining the magnitude relationship between the first target parameter and the second target parameter; and
if the first target parameter is lower than the second target parameter, determining that the sharpness of the first image does not meet the preset requirement.
In other embodiments of the invention, the processor 61 executes the image analysis program stored in the memory 62 to implement the following steps:
obtaining the position of the motor in the image collector at the time the first image and the second image were acquired; and
determining, based on the motor position, whether focusing was accurate when the image collector acquired the first image and the second image.
It should be noted that, for the specific implementation of the steps performed by the processor in this embodiment, reference may be made to the implementation of the image analysis methods provided by the embodiments corresponding to Figs. 3-4 and 8; details are not repeated here.
The image analysis device provided by this embodiment of the invention can compare the two acquired images automatically, without manual comparison, which solves the inaccuracy of existing methods for testing a test machine's camera, improves the accuracy of the camera test results, avoids wasting human resources, and enhances the intelligence of the machine.
Based on the foregoing embodiments, an embodiment of the invention provides a computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the following steps:
obtaining a first image and a second image, where the first image is acquired for a target object using the image collector of a first terminal, the second image is acquired for the target object using the image collector of a second terminal, and the first image and the second image are acquired at the same angle;
processing the first image to determine a first area, and processing the second image to determine a second area;
calculating a first target parameter of the first area and a second target parameter of the second area, where the first target parameter indicates the sharpness of the image in the first area and the second target parameter indicates the sharpness of the image in the second area; and
analyzing the first image based on the first target parameter and the second target parameter.
In other embodiments of the invention, the one or more programs are executable by the one or more processors to implement the following steps:
processing the first image using a preset algorithm to determine the first area from the first image; and
processing the second image using the preset algorithm to determine the second area from the second image.
In other embodiments of the invention, the one or more programs are executable by the one or more processors to implement the following steps:
performing edge analysis on the first image using the preset algorithm to obtain a first position of the contour of the target object in the first image; and
determining the first area from the first image based on the first position.
In other embodiments of the invention, the one or more programs are executable by the one or more processors to implement the following steps:
determining a first edge region and a first central region from the first image according to a preset determination method;
determining whether the first edge region and the first central region include any region where the first position lies; and
if they do, determining that the first edge region and the first central region are the first area.
In other embodiments of the invention, the one or more programs are executable by the one or more processors to implement the following steps:
performing edge analysis on the second image using the preset algorithm to obtain a second position of the contour of the target object in the second image; and
determining the second area from the second image based on the second position.
In other embodiments of the invention, the one or more programs are executable by the one or more processors to implement the following steps:
determining a second edge region and a second central region from the second image according to the preset determination method;
determining whether the second edge region and the second central region include any region where the second position lies; and
if they do, determining that the second edge region and the second central region are the second area.
In other embodiments of the invention, the one or more programs are executable by the one or more processors to implement the following steps:
obtaining the magnitude relationship between the first target parameter and the second target parameter; and
if the first target parameter is lower than the second target parameter, determining that the sharpness of the first image does not meet the preset requirement.
In other embodiments of the invention, the one or more programs are executable by the one or more processors to implement the following steps:
obtaining the position of the motor in the image collector at the time the first image and the second image were acquired; and
determining, based on the motor position, whether focusing was accurate when the image collector acquired the first image and the second image.
It should be noted that, for the specific implementation of the steps performed by the processor in this embodiment, reference may be made to the implementation of the image analysis methods provided by the embodiments corresponding to Figs. 3-4 and 8; details are not repeated here.
The memory 109 in the embodiment corresponding to Fig. 1 of the invention is consistent with the memory 62 in the embodiment corresponding to Fig. 9, and the processor 110 in the embodiment corresponding to Fig. 1 is consistent with the processor 61 in the embodiment corresponding to Fig. 9.
It should be noted that the computer-readable storage medium may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a ferromagnetic random access memory (FRAM), a flash memory, a magnetic surface memory, an optical disc, a compact disc read-only memory (CD-ROM), or other such memory; it may also be any electronic device that includes one of the above memories or any combination thereof, such as a mobile phone, a computer, a tablet device, or a personal digital assistant.
It should be noted that herein, term " comprising ", "comprising" or its any other variant are intended to non-row His property includes, so that process, method, article or device including a series of elements not only include those elements, and And it further includes other elements that are not explicitly listed or further includes intrinsic for this process, method, article or device institute Element.In the absence of more restrictions, the element limited by sentence "including a ...", it is not excluded that including this Also there are other identical elements in the process of element, method, article or device.
The embodiments of the present invention are for illustration only, do not represent the quality of embodiment.
Through the above description of the embodiments, those skilled in the art can be understood that above-described embodiment side Method can add the mode of required general hardware platform to realize by software, naturally it is also possible to by hardware, but in many cases The former is more preferably embodiment.Based on such understanding, technical scheme of the present invention substantially in other words does the prior art Going out the part of contribution can be embodied in the form of software product, which is stored in a storage medium In (such as ROM/RAM, magnetic disc, CD), used including some instructions so that a station terminal equipment (can be mobile phone, computer takes Business device, air conditioner or the network equipment etc.) perform each described method of embodiment of the present invention.
The present invention be with reference to according to the method for the embodiment of the present invention, the flow of equipment (system) and computer program product Figure and/or block diagram describe.It should be understood that it can be realized by computer program instructions every first-class in flowchart and/or the block diagram The combination of flow and/or box in journey and/or box and flowchart and/or the block diagram.These computer programs can be provided The processor of all-purpose computer, special purpose computer, Embedded Processor or other programmable data processing devices is instructed to produce A raw machine so that the instruction performed by computer or the processor of other programmable data processing devices is generated for real The device of function specified in present one flow of flow chart or one box of multiple flows and/or block diagram or multiple boxes.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, the instruction apparatus implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the invention. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (10)

1. An image analysis method, characterized in that the method comprises:
acquiring a first image and a second image, wherein the first image is captured for a target object by an image collector of a first terminal, the second image is captured for the target object by an image collector of a second terminal, and the first image and the second image are captured at the same angle;
processing the first image to determine a first area, and processing the second image to determine a second area;
calculating a first target parameter of the first area and a second target parameter of the second area, wherein the first target parameter indicates the sharpness of the image in the first area, and the second target parameter indicates the sharpness of the image in the second area;
analyzing the first image based on the first target parameter and the second target parameter.
2. The method according to claim 1, characterized in that processing the first image to determine a first area and processing the second image to determine a second area comprises:
processing the first image using a preset algorithm to determine the first area from the first image;
processing the second image using the preset algorithm to determine the second area from the second image.
3. The method according to claim 2, characterized in that processing the first image using a preset algorithm to determine the first area from the first image comprises:
performing edge analysis on the first image using the preset algorithm to obtain first position information of the contour of the target object in the first image;
determining the first area from the first image based on the first position information.
4. The method according to claim 3, characterized in that determining the first area from the first image based on the first position information comprises:
determining a first edge region and a first central region from the first image according to a preset determination method;
determining whether the first edge region and the first central region contain the region in which the first position is located;
if the first edge region and the first central region contain the region in which the first position is located, determining that the first edge region and the first central region are the first area.
5. The method according to claim 2 or 3, characterized in that processing the second image using the preset algorithm to determine the second area from the second image comprises:
performing edge analysis on the second image using the preset algorithm to obtain second position information of the contour of the target object in the second image;
determining the second area from the second image based on the second position information.
6. The method according to claim 5, characterized in that determining the second area from the second image based on the second position information comprises:
determining a second edge region and a second central region from the second image according to a preset determination method;
determining whether the second edge region and the second central region contain the region in which the second position is located;
if the second edge region and the second central region contain the region in which the second position is located, determining that the second edge region and the second central region are the second area.
7. The method according to claim 1, characterized in that analyzing the first image based on the first target parameter and the second target parameter comprises:
obtaining the magnitude relationship between the first target parameter and the second target parameter;
if the first target parameter is less than the second target parameter, determining that the sharpness of the first image does not meet a preset requirement.
8. The method according to claim 1, characterized in that before acquiring the first image of the target object using the image collector of the first terminal and acquiring the second image of the target object using the image collector of the second terminal, the method further comprises:
obtaining the position of a motor in the image collector when the first image and the second image are captured;
determining, based on the position of the motor, whether the image collector focused accurately when capturing the first image and the second image.
9. An image analysis device, characterized in that the device comprises a processor, a memory, and a communication bus, wherein:
the communication bus is configured to implement a communication connection between the processor and the memory;
the processor is configured to execute an image analysis program stored in the memory to implement the following steps:
acquiring a first image and a second image, wherein the first image is captured for a target object by an image collector of a first terminal, the second image is captured for the target object by an image collector of a second terminal, and the first image and the second image are captured at the same angle;
processing the first image to determine a first area, and processing the second image to determine a second area;
calculating a first target parameter of the first area and a second target parameter of the second area, wherein the first target parameter indicates the sharpness of the image in the first area, and the second target parameter indicates the sharpness of the image in the second area;
analyzing the first image based on the first target parameter and the second target parameter.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores one or more programs, the one or more programs being executable by one or more processors to implement the steps of the image analysis method according to any one of claims 1 to 8.
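The pipeline recited in claims 1 to 7 — selecting an analysis area in each terminal's shot, computing a per-area "target parameter" measuring sharpness, then flagging the first image when its parameter falls below the second's — can be sketched as follows. The claims name neither the "preset algorithm" nor the target-parameter formula, so the gradient-threshold edge analysis, central-region crop, and Laplacian-variance sharpness measure below are illustrative stand-ins, not the patented implementation.

```python
import numpy as np

def edge_positions(image, threshold=0.2):
    # Stand-in "preset algorithm" for edge analysis: pixels whose gradient
    # magnitude exceeds a fraction of the maximum are treated as contour
    # positions of the target object.
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    return np.argwhere(magnitude > threshold * magnitude.max())

def central_region(image, fraction=0.5):
    # Stand-in "preset determination method": crop the central `fraction`
    # of each dimension as the analysis area.
    h, w = image.shape
    dh, dw = int(h * (1 - fraction) / 2), int(w * (1 - fraction) / 2)
    return image[dh:h - dh, dw:w - dw]

def sharpness(region):
    # Stand-in target parameter: variance of a discrete Laplacian.
    # Blur suppresses high-frequency detail, so blurrier regions score lower.
    r = region.astype(float)
    lap = (-4 * r[1:-1, 1:-1] + r[:-2, 1:-1] + r[2:, 1:-1]
           + r[1:-1, :-2] + r[1:-1, 2:])
    return float(lap.var())

def first_image_meets_requirement(img1, img2):
    # Claim 7: the first image fails the preset requirement when its
    # target parameter is below the second image's.
    p1 = sharpness(central_region(img1))
    p2 = sharpness(central_region(img2))
    return p1 >= p2
```

In the scenario the claims describe, `img1` would be the shot from the camera under test (first terminal) and `img2` the shot of the same object, at the same angle, from a reference camera (second terminal); a defocused first camera then yields the smaller target parameter and is flagged.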
CN201810005265.9A 2018-01-03 2018-01-03 Image analysis method, device, and computer-readable storage medium Pending CN108230312A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810005265.9A CN108230312A (en) 2018-01-03 2018-01-03 Image analysis method, device, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810005265.9A CN108230312A (en) 2018-01-03 2018-01-03 Image analysis method, device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN108230312A true CN108230312A (en) 2018-06-29

Family

ID=62645064

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810005265.9A Pending CN108230312A (en) 2018-01-03 2018-01-03 A kind of image analysis method, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN108230312A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101685240A (en) * 2008-09-26 2010-03-31 致伸科技股份有限公司 Method for judging focusing quality of image extracting device
US20150187133A1 (en) * 2012-07-30 2015-07-02 Sony Computer Entertainment Europe Limited Localisation and mapping
CN105381966A (en) * 2015-12-14 2016-03-09 芜湖恒信汽车内饰制造有限公司 Assembly line picture comparison detection device
CN106385579A (en) * 2016-09-12 2017-02-08 努比亚技术有限公司 Camera detection device, method and multi-camera terminal


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109446887A (en) * 2018-09-10 2019-03-08 易诚高科(大连)科技有限公司 Image scene description generation method for subjective evaluation of image quality
CN109446887B (en) * 2018-09-10 2022-03-25 易诚高科(大连)科技有限公司 Image scene description generation method for subjective evaluation of image quality
CN109558829A (en) * 2018-11-26 2019-04-02 浙江大华技术股份有限公司 Lane change detection method and device, computer device, and readable storage medium
CN112863408A (en) * 2019-11-26 2021-05-28 逸美德科技股份有限公司 Screen resolution detection method and device

Similar Documents

Publication Publication Date Title
CN108108704A Face recognition method and mobile terminal
CN108196778A Screen state control method, mobile terminal, and computer-readable storage medium
CN109167910A Focusing method, mobile terminal, and computer-readable storage medium
CN107705251A Picture stitching method, mobile terminal, and computer-readable storage medium
CN108063901A Image acquisition method, terminal, and computer-readable storage medium
CN106648118A Augmented-reality-based virtual teaching method and terminal device
CN107832784A Image beautification method and mobile terminal
CN107493426A Information acquisition method, device, and computer-readable storage medium
CN107566635A Screen brightness setting method, mobile terminal, and computer-readable storage medium
CN107959795A Information acquisition method, device, and computer-readable storage medium
CN108196922A Method for opening an application, terminal, and computer-readable storage medium
CN108182028A Control method, terminal, and computer-readable storage medium
CN110086993A Image processing method and device, mobile terminal, and computer-readable storage medium
CN108171743A Image shooting method, device, and computer-readable storage medium
CN108174093A Moment-image method, device, and computer-readable storage medium
CN108230270A Noise reduction method, terminal, and computer-readable storage medium
CN107704514A Photo management method, device, and computer-readable storage medium
CN107580181A Focusing method, device, and computer-readable storage medium
CN108230312A Image analysis method, device, and computer-readable storage medium
CN109934769A Long-screenshot method, terminal, and storage medium
CN109218538A Mobile terminal screen control method, mobile terminal, and computer-readable storage medium
CN108230104A Application category feature generation method, mobile terminal, and readable storage medium
CN107255812A 3D-technology-based speed measurement method, mobile terminal, and storage medium
CN108012029A Information processing method, device, and computer-readable storage medium
CN108052985A Information acquisition method, information acquisition terminal, and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180629