CN103295238A - ROI (region of interest) motion detection based real-time video positioning method for Android platform - Google Patents


Info

Publication number
CN103295238A
CN103295238A CN2013102196835A CN201310219683A
Authority
CN
China
Prior art keywords
roi
frame
character
real
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013102196835A
Other languages
Chinese (zh)
Other versions
CN103295238B (en)
Inventor
顾韵华
陈培培
张俊勇
高宝
朱节中
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Bohui Technology Inc
Suzhou High Airlines Intellectual Property Rights Operation Co ltd
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology filed Critical Nanjing University of Information Science and Technology
Priority to CN201310219683.5A priority Critical patent/CN103295238B/en
Publication of CN103295238A publication Critical patent/CN103295238A/en
Application granted granted Critical
Publication of CN103295238B publication Critical patent/CN103295238B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a real-time video positioning method for the Android platform based on ROI (region of interest) motion detection. Data captured by the device undergoes real-time format conversion and image preprocessing via image processing algorithms; the motion amplitude of the mobile device is estimated by an ROI motion detection algorithm, and repeated character localization is skipped for video frames with little motion. The efficiency of real-time character localization is thereby improved while localization accuracy is preserved. Exploiting the similarity and continuity of video frames, the change in information quantity within the ROI of adjacent frames is computed to perform motion detection, so that redundant localization of unchanged characters is omitted and localization efficiency is significantly improved. The method also improves program running efficiency and effectively improves localization timeliness, and is especially suitable for localizing printed characters in simple scenes.

Description

Real-time video localization method based on ROI motion detection on the Android platform
Technical field
The invention belongs to the technical field of image processing and relates to a real-time video localization method, more specifically to a real-time video localization method based on ROI motion detection applied to the Android platform.
Background technology
With the arrival of the 3G era, mobile terminal devices have developed rapidly in recent years, and various intelligent terminal operating systems have emerged with them, Android being one of them. As the first complete, open, and free mobile phone platform, Android quickly captured a large share of the smartphone operating system market, and a large number of applications based on the Android system have emerged.
Users can conveniently capture digitized images of natural scenes with mobile devices. Like other man-made structures, images of natural scenes contain important textual information, which is significant for helping people acquire and understand scene content. To facilitate browsing, managing, and understanding the content of images or video, the captured digital images must be processed and deeply understood, which has driven the analysis and study of digital video and image content; character recognition is one such task. Detecting and recognizing text lines in natural scenes can satisfy specific needs such as those of blind users (detected text converted to speech) or drivers (reading the content of traffic signs), and therefore has high research value. Within the character recognition process, accurate localization of the characters is the most critical and fundamental step.
Some character recognition programs for one- and two-dimensional barcodes already exist on the Android system. In practice, however, an Android mobile device is usually hand-held by the user and therefore constantly in motion. Existing character recognition programs relocate the characters in every frame during recognition, yet the motion amplitude of the device is often very small, amounting only to slight shake. Relocating the characters in every frame image wastes considerable computing resources and greatly reduces the speed of real-time character localization.
Summary of the invention
To solve the above problems, the invention discloses a real-time video localization method for the Android platform based on ROI motion detection. Image processing algorithms perform real-time data conversion and image preprocessing on the video captured by the device; an ROI motion detection algorithm then estimates the motion amplitude of the mobile device, and the repeated character localization step is omitted for video frames with small motion amplitude. This improves the efficiency of real-time character localization while maintaining localization accuracy.
In order to achieve the above object, the invention provides following technical scheme:
A real-time video localization method based on ROI motion detection on the Android platform comprises the following steps:
Step 10: convert the original YUV420-format video data stream into video frame images in RGB format using a real-time YUV-to-RGB conversion algorithm;
Step 20: preprocess the RGB-format video frame images; the preprocessing comprises grayscale conversion, binarization, and edge detection;
Step 30: apply the ROI motion detection method to each frame image, computing the change of state between adjacent frames to track the motion amplitude of the device. When the motion amplitude between adjacent frames is small, the character localization result of the previous frame is reused; when it is large, character localization is performed anew on the following frame.
As a preferred embodiment of the invention, in step 20 the grayscale conversion uses the weighted average method, the binarization threshold is computed with the OTSU method, and the edge detection uses the Canny edge detection algorithm.
As a preferred embodiment of the invention, the ROI motion detection method comprises the following steps:
Step 301: perform character region localization on the initial frame, and record the position of the resulting region as the position of the region of interest of the second frame;
Step 302: compute the information quantity of the region of interest in each of the adjacent frames, and compute the absolute value of the difference between them;
Step 303: when the information difference from step 302 exceeds the information difference threshold, perform character localization anew on this frame; when the information difference does not exceed the threshold, reuse the character localization result of the previous frame; then continue with step 302.
As a preferred embodiment of the invention, the character localization process comprises the following steps:
Step 401: apply morphological dilation to the edge detection result of the frame to be localized;
Step 402: screen the connected components in the image produced by step 401 according to a predefined screening rule to obtain character region information, and crop the binary image at the minimum bounding rectangles of the selected connected components to obtain the cropped character localization result.
As a preferred embodiment of the invention, the information quantity is the black pixel count, computed as follows: scan the region of interest in the binary image and count the pixels whose gray value is 0.
Compared with the prior art, the ROI-motion-detection-based real-time video character localization method provided by the invention exploits the similarity and continuity between video frames: it computes the change in information quantity in the region of interest of adjacent frames, performs motion detection, and omits the relocalization of unchanged characters, significantly improving localization efficiency. In addition, to address the limited processing power of Android mobile devices, the complex image processing steps are implemented in the native language C++ using the Android native development framework, improving the efficiency of program execution. Compared with a method written purely in Java that performs localization on every video frame, the invention effectively improves localization timeliness and is particularly suitable for localizing multiple printed characters in simple scenes.
Description of drawings
Fig. 1 is a flow chart of the steps of the real-time video character localization method provided by the invention;
Fig. 2 is a processing flow chart of the ROI motion detection method of step 30 in the embodiment;
Fig. 3 is a processing flow chart of the character localization method;
Fig. 4 is the image after grayscale conversion;
Fig. 5 is the image after binarization;
Fig. 6 is the image after edge detection;
Fig. 7 is the image after morphological dilation;
Fig. 8 shows the screening rule for connected components;
Fig. 9 is the image after cropping according to the connected component screening;
Fig. 10 shows the comparative performance test results of the embodiment.
Embodiment
The technical scheme provided by the invention is described in detail below with reference to specific embodiments. It should be understood that the following embodiments only illustrate the invention and do not limit its scope.
When performing character localization, the method first acquires the current preview frame data collected by the Android handheld device and then applies further image processing to the acquired data, as shown in Fig. 1. In this embodiment, a color image containing characters captured by a Lenovo ThinkPad Tablet 183823C is used as the original image to be processed. The specific steps are as follows:
Step 10 converts the video stream from the YUV420 standard format to RGB format, which is easier to process for image operations. The RGB components are computed from the YUV (i.e. YCrCb) components as follows:
R = 1.164*(Y-16) + 1.596*(Cr-128)
G = 1.164*(Y-16) - 0.813*(Cr-128) - 0.392*(Cb-128)    (1-1)
B = 1.164*(Y-16) + 2.017*(Cb-128)
where Y represents luma (lightness), and Cr and Cb are the chroma components, defining the two aspects of color, namely hue and saturation.
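Formula (1-1) can be sketched as a small per-pixel routine. The rounding and clipping to [0, 255] are assumptions, since the patent text does not state how out-of-range values are handled:

```python
def yuv_to_rgb(y, cr, cb):
    """Convert one YCrCb triple to RGB per formula (1-1).

    Results are rounded and clipped to [0, 255]; this clipping policy
    is an assumption not stated in the patent text.
    """
    r = 1.164 * (y - 16) + 1.596 * (cr - 128)
    g = 1.164 * (y - 16) - 0.813 * (cr - 128) - 0.392 * (cb - 128)
    b = 1.164 * (y - 16) + 2.017 * (cb - 128)
    clip = lambda v: max(0, min(255, int(round(v))))
    return clip(r), clip(g), clip(b)
```

For a full YUV420 frame the Y plane is at full resolution while the Cr and Cb planes are subsampled 2x2, so each chroma sample is shared by a 2x2 block of luma samples when applying this conversion frame-wide.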
Step 20 preprocesses every frame image of the video with grayscale conversion, binarization, and edge detection, where binarization uses the OTSU method to compute the threshold and edge detection uses the Canny algorithm to extract contours. After preprocessing, an image in which the character regions have distinct features is obtained.
Specifically, the concrete steps of step 20 are as follows:
Step 201 converts the format-converted video frame to grayscale, i.e. converts the color RGB image into a grayscale image. The gray value is preferably computed with the weighted average method: the R, G, B components of the RGB image are given different weights W_R, W_G, W_B, and their weighted average is taken, expressed as:
g = (W_R*R + W_G*G + W_B*B)/3    (2-1)
In general, of the three colors red, green, and blue, the human eye is most sensitive to green, then red, and least sensitive to blue; therefore this example chooses W_R = 0.299, W_G = 0.587, W_B = 0.114. The grayscale result is shown in Fig. 4.
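A sketch of the weighted-average grayscale conversion. Note that formula (2-1) as printed divides by 3, but with the chosen weights already summing to 1 (0.299 + 0.587 + 0.114), the conventional form is the plain weighted sum, which is what this sketch computes:

```python
def to_gray(rgb_pixels, wr=0.299, wg=0.587, wb=0.114):
    """Weighted-average grayscale conversion.

    rgb_pixels: list of (R, G, B) tuples.  Computes the standard
    weighted sum g = wr*R + wg*G + wb*B; since the weights sum to 1,
    no further division is applied here (the "/3" in the printed
    formula (2-1) would shrink the gray range to roughly 0..85).
    """
    return [int(round(wr * r + wg * g + wb * b)) for r, g, b in rgb_pixels]
```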
Step 202 binarizes the grayscale image. Let (x, y) be the coordinates of a point in the grayscale image, G = {0, 1, ..., 255} (the integers from 0 to 255, i.e. the gray range), and g(x, y) the gray value of the pixel at (x, y). A gray value t (t ∈ G) is taken as the threshold, and the pixels of the grayscale image are divided into two parts, those above threshold t and those below it. The threshold t is determined with the maximum between-class variance method (OTSU): the algorithm splits the image at a certain gray value into two groups, corresponding to the background part and the foreground part (the character part). Let P_i be the probability that gray value i (0 ≤ i ≤ 255) appears in the image, and let t be the global threshold gray value. The pixels of the image are divided into two classes: the background class A = [0, 1, ..., t] with gray values not exceeding the threshold, and the foreground class B = [t+1, t+2, ..., 255] with gray values above it. The probabilities of the background and foreground classes are P_A and P_B respectively; the gray means ω_A and ω_B of the two classes are:
ω_A = Σ_{i=0..t} i*P_i / P_A,  ω_B = Σ_{i=t+1..255} i*P_i / P_B    (2-2)
The total gray mean of the image is:
ω_0 = P_A*ω_A + P_B*ω_B = Σ_{i=0..255} i*P_i    (2-3)
The between-class variance of classes A and B is therefore:
σ² = P_A*(ω_A - ω_0)² + P_B*(ω_B - ω_0)²    (2-4)
The threshold t is swept over the gray range 0~255; the value of t for which σ² in formula (2-4) reaches its maximum, i.e. for which the variance between classes A and B is largest, is taken as the threshold.
The binarization formula is:
b(x, y) = 1 if g(x, y) ≥ t;  b(x, y) = 0 if g(x, y) < t    (2-5)
where b(x, y) is the pixel value after binarization. The image after OTSU binarization is shown in Fig. 5.
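The threshold sweep of formulas (2-2) through (2-4) can be sketched directly from a gray-level histogram. This is a plain exhaustive search over t, not an optimized implementation:

```python
def otsu_threshold(pixels):
    """Return the threshold t maximizing the between-class variance
    sigma^2 = P_A*(w_A - w_0)^2 + P_B*(w_B - w_0)^2  (formula 2-4).

    pixels: iterable of integer gray values in 0..255.
    """
    hist = [0] * 256
    n = 0
    for p in pixels:
        hist[p] += 1
        n += 1
    prob = [h / n for h in hist]
    best_t, best_var = 0, -1.0
    for t in range(256):
        p_a = sum(prob[: t + 1])          # background class A = [0..t]
        p_b = 1.0 - p_a                   # foreground class B = [t+1..255]
        if p_a == 0 or p_b == 0:
            continue                      # degenerate split, skip
        w_a = sum(i * prob[i] for i in range(t + 1)) / p_a
        w_b = sum(i * prob[i] for i in range(t + 1, 256)) / p_b
        w_0 = p_a * w_a + p_b * w_b       # total gray mean (formula 2-3)
        var = p_a * (w_a - w_0) ** 2 + p_b * (w_b - w_0) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

On a bimodal histogram the maximizing t falls between the two modes, separating dark characters from a light background.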
Step 203 performs edge detection on the binarized image using the Canny edge detector, an optimal step-edge detection algorithm. The algorithm uses Gaussian first-order derivatives to compute the image gradient and finds the local maxima of the gradient to obtain the strength and direction of the image edges; it then detects strong and weak edges with a double-threshold method and outputs weak edges only when they connect with strong edges to form contours. The core steps are:
(1) remove the noise in the image by smoothing it with a Gaussian filter;
(2) compute the gradient of the gray values, including magnitude and direction, usually with finite differences of the first-order partial derivatives;
(3) find the local maxima of the gradient magnitude;
(4) select two thresholds: the high threshold yields the rough edges of the image, and the low threshold collects additional points to connect the edges, solving the problem of unclosed edges.
The result of Canny edge detection by the above steps is shown in Fig. 6.
Step 30 applies the ROI motion detection method to each frame image: the change of state between adjacent frames is computed to track the motion amplitude of the device and judge whether it is large. When the motion amplitude between adjacent frames is small, the character localization result of the previous frame can be reused and the characters in the following frame need not be relocalized; when the motion amplitude is large, character localization must be performed anew on the following frame, i.e. the character region ROI is updated. Whether the motion amplitude of a frame relative to the previous one is large is judged by comparing the difference in black pixel count within the ROI of the two frames. In this way every frame in the sequence is traversed, frames with small motion amplitude are not relocalized, and localization efficiency is clearly improved.
The concrete processing flow of the ROI motion detection method is shown in Fig. 2; the concrete steps are as follows:
Step 301 performs character localization on the initial frame, and the position of the resulting region is recorded as the position of the region of interest of the second frame. The character localization result of the first frame is the rectangular region Rect_1 = F_1(x_1, y_1, w_1, h_1), where (x_1, y_1) is the coordinate of the upper-left corner of the rectangle in the image, w_1 is the width of the rectangle, and h_1 is its height. Let the i-th frame image be F_i; then its character localization result is the rectangular region Rect_i = F_i(x_i, y_i, w_i, h_i), where (x_i, y_i) is the coordinate of the upper-left corner, w_i the width, and h_i the height. Define the ROI (region of interest) of frame image F_{i+1} as the region determined by Rect_i, written M_{i+1}, i.e. M_{i+1} = F_{i+1}(x_i, y_i, w_i, h_i). The black pixel count of the region of interest in frame i is its information quantity D_i, computed as follows: scan the region of interest in the binary image, i.e. every point in the region from [x_i, y_i] to [x_i + w_i, y_i + h_i], and count the pixels whose gray value is 0; this count is the information quantity D_i of the region of interest.
Step 302 computes Δ, the absolute value of the difference between the information quantities of the regions of interest of frame i and frame i+1, and judges whether Δ exceeds the information difference threshold d. The threshold d is preferably 1% of the total information capacity of the image, i.e. d = M*N/100, where M and N are the width and height of the image respectively.
Step 303: if Δ > d, character localization is performed anew on this frame. If Δ ≤ d, the characters are unchanged and need not be relocalized; the character localization result and information quantity of frame i+1 are carried over from frame i, specifically D_{i+1} = D_i, M_{i+1} = M_i, i = i+1. Finally, the method returns to step 302, i.e. it continues to judge whether the difference between the information quantity of the next frame and that of the current character localization region exceeds the threshold d.
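The loop of steps 301–303 can be sketched as follows. Here `localize_characters` is a hypothetical callback standing in for the morphology-plus-connected-component localizer of Fig. 3; it is not part of the patent text:

```python
def track_roi(binary_frames, localize_characters):
    """ROI motion detection over a sequence of binary frames.

    binary_frames: list of 2-D lists of 0/1 pixels (0 = black).
    localize_characters: callback returning an (x, y, w, h) rectangle
    for a frame (hypothetical stand-in for the Fig. 3 localizer).
    Returns the rectangle used for each frame.
    """
    def info(frame, rect):
        # Information quantity D: count of black (0) pixels in the ROI.
        x, y, w, h = rect
        return sum(row[x:x + w].count(0) for row in frame[y:y + h])

    height = len(binary_frames[0])
    width = len(binary_frames[0][0])
    d = width * height / 100          # threshold: 1% of image capacity

    rect = localize_characters(binary_frames[0])   # step 301
    d_prev = info(binary_frames[0], rect)
    rects = [rect]
    for frame in binary_frames[1:]:
        d_cur = info(frame, rect)                  # step 302
        if abs(d_cur - d_prev) > d:                # step 303: relocalize
            rect = localize_characters(frame)
            d_prev = info(frame, rect)
        # otherwise reuse the previous rect and information quantity
        rects.append(rect)
    return rects
```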
The method used to localize the characters of the initial frame in step 301 and to relocalize the characters of a frame in step 303 is the same: a method combining morphology with connected component analysis. First, the dilation operation of mathematical morphology turns the character regions into approximately rectangular regions; the connected components of these regions are then screened, their minimum bounding rectangles are found, and the image is cropped to obtain the cropped character localization result.
The concrete processing flow of character localization is shown in Fig. 3; the steps are as follows:
Step 401 applies morphological dilation to the edge detection result from step 30 that needs to be localized. Let the image to be processed be X and choose as structuring element B a 3 x 3 square. The center of B is slid over each point of X in turn; if any point of B falls on a point within X, the point under the center becomes a black point. The result of the morphological dilation is shown in Fig. 7.
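The 3 x 3 dilation of step 401 can be sketched as a sliding-window OR, treating black (0) as foreground, as with the dark characters after binarization. This is a straightforward sketch, not an optimized implementation:

```python
def dilate3x3(img):
    """Binary morphological dilation with a 3x3 square structuring element.

    img: 2-D list of 0/1 pixels, 0 = black foreground.  A pixel of the
    output becomes black if any pixel in its 3x3 neighborhood of the
    input is black; borders are handled by clamping the window.
    """
    h, w = len(img), len(img[0])
    out = [[1] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if any(img[yy][xx] == 0
                   for yy in range(max(0, y - 1), min(h, y + 2))
                   for xx in range(max(0, x - 1), min(w, x + 2))):
                out[y][x] = 0
    return out
```

Dilation thickens the thin Canny edges so that the strokes of each character merge into one approximately rectangular blob, which the connected component screening of step 402 can then pick out.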
Step 402 screens the connected components in the image produced by step 401 according to the screening rule shown in Fig. 8 (this rule can be modified as required) to obtain the character region information, and crops the binary image produced by step 202 at the minimum bounding rectangles of the selected connected components to obtain the cropped character localization result. In general, pictures taken by the same Android device have roughly similar size and pixel characteristics. Let the width of the original image be W and its height H; let the width of the minimum bounding rectangle of a character connected component be cW and its height cH, and let the area of the connected component be cA. The cropping result of the character regions is shown in Fig. 9.
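A sketch of the connected component screening of step 402. The actual screening rule is given only in Fig. 8 and is not reproduced in the text, so the `min_w`/`min_h`/`min_area` bounds below are hypothetical placeholders for it:

```python
def screen_components(img, min_w=2, min_h=2, min_area=4):
    """Label black (0) connected components (4-connectivity) and return
    the minimum bounding rectangles (x, y, w, h) of those that pass
    the screening rule.  min_w/min_h/min_area stand in for the rule of
    Fig. 8, which the patent text does not reproduce.
    """
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    rects = []
    for sy in range(h):
        for sx in range(w):
            if img[sy][sx] != 0 or seen[sy][sx]:
                continue
            # flood-fill one component, tracking its bounding box
            stack = [(sx, sy)]
            seen[sy][sx] = True
            x0 = x1 = sx
            y0 = y1 = sy
            area = 0
            while stack:
                x, y = stack.pop()
                area += 1
                x0, x1 = min(x0, x), max(x1, x)
                y0, y1 = min(y0, y), max(y1, y)
                for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
                    if 0 <= nx < w and 0 <= ny < h \
                            and img[ny][nx] == 0 and not seen[ny][nx]:
                        seen[ny][nx] = True
                        stack.append((nx, ny))
            cw, ch = x1 - x0 + 1, y1 - y0 + 1
            if cw >= min_w and ch >= min_h and area >= min_area:
                rects.append((x0, y0, cw, ch))
    return rects
```

The returned rectangles are then used to crop the binary image of step 202, yielding the character localization result.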
The above image processing and character localization methods use the JNI (Java Native Interface) development framework for processing: the complex conversion steps are written in the native language (C++), improving the efficiency of the program.
In this embodiment, 10 groups of image character localization experiments were run with the localization method including motion detection, and 10 more with a localization method without motion detection (i.e. every frame image is relocalized; the remaining image processing is identical to this embodiment) as the comparison case. The performance test comparison of the two methods is shown in Fig. 10. The abscissa is the number of the test group; the ordinate is the average processing time per video frame for localization, in milliseconds (ms), before and after adding ROI motion detection. With the ROI motion detection step, the average processing time per video frame is about 90 ms; compared with the method without ROI motion detection, the processing speed is improved by about 40%.
The technical means disclosed by the scheme of the invention are not limited to those disclosed in the above embodiment, but also include technical schemes composed of any combination of the above technical features.

Claims (5)

  1. A real-time video localization method based on ROI motion detection on the Android platform, characterized by comprising the steps of:
    Step 10: converting the original YUV420-format video data stream into video frame images in RGB format using a real-time YUV-to-RGB conversion algorithm;
    Step 20: preprocessing the RGB-format video frame images, the preprocessing comprising grayscale conversion, binarization, and edge detection;
    Step 30: applying the ROI motion detection method to each frame image, computing the change of state between adjacent frames to track the motion amplitude of the device; when the motion amplitude between adjacent frames is small, reusing the character localization result of the previous frame; when the motion amplitude between adjacent frames is large, performing character localization anew on the following frame.
  2. The real-time video localization method based on ROI motion detection on the Android platform according to claim 1, characterized in that: in step 20, the grayscale conversion uses the weighted average method, the binarization threshold is computed with the OTSU method, and the edge detection uses the Canny edge detection algorithm.
  3. The real-time video localization method based on ROI motion detection on the Android platform according to claim 1 or 2, characterized in that the ROI motion detection method comprises the steps of:
    Step 301: performing character region localization on the initial frame, and recording the position of the resulting region as the position of the region of interest of the second frame;
    Step 302: computing the information quantity of the region of interest in each of the adjacent frames, and computing the absolute value of the difference between them;
    Step 303: when the information difference from step 302 exceeds the information difference threshold, performing character localization anew on this frame; when the information difference does not exceed the threshold, reusing the character localization result of the previous frame; then continuing with step 302.
  4. The real-time video localization method based on ROI motion detection on the Android platform according to claim 3, characterized in that the character localization process comprises the steps of:
    Step 401: applying morphological dilation to the edge detection result of the frame to be localized;
    Step 402: screening the connected components in the image produced by step 401 according to a predefined screening rule to obtain character region information, and cropping the binary image at the minimum bounding rectangles of the selected connected components to obtain the cropped character localization result.
  5. The real-time video localization method based on ROI motion detection on the Android platform according to claim 3, characterized in that the information quantity is the black pixel count, computed as follows: scanning the region of interest in the binary image and counting the pixels whose gray value is 0.
CN201310219683.5A 2013-06-03 2013-06-03 Video real-time location method based on ROI motion detection on Android platform Active CN103295238B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310219683.5A CN103295238B (en) 2013-06-03 2013-06-03 Video real-time location method based on ROI motion detection on Android platform


Publications (2)

Publication Number Publication Date
CN103295238A true CN103295238A (en) 2013-09-11
CN103295238B CN103295238B (en) 2016-08-10

Family

ID=49096043

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310219683.5A Active CN103295238B (en) 2013-06-03 2013-06-03 Video real-time location method based on ROI motion detection on Android platform

Country Status (1)

Country Link
CN (1) CN103295238B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101620680A (en) * 2008-07-03 2010-01-06 三星电子株式会社 Recognition and translation method of character image and device
US20110161076A1 (en) * 2009-12-31 2011-06-30 Davis Bruce L Intuitive Computing Methods and Systems


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhao Xin (赵欣): "Research on a Pedestrian Safety State Detection System", Wanfang Dissertation Database, 15 February 2011 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106408577A (en) * 2016-09-21 2017-02-15 安徽慧视金瞳科技有限公司 Successive frame connected domain parallel tagging method used for projection interactive system
CN106408577B (en) * 2016-09-21 2019-12-31 安徽慧视金瞳科技有限公司 Continuous frame connected domain parallel marking method for projection interactive system
CN106991441A (en) * 2017-03-30 2017-07-28 浙江科技学院 Merge the plant specimen sorting technique and system of multiple dimensioned direction textural characteristics
CN109635957A (en) * 2018-11-13 2019-04-16 广州裕申电子科技有限公司 A kind of equipment maintenance aid method and system based on AR technology
CN112541497A (en) * 2019-09-04 2021-03-23 天津科技大学 Measuring rod image monitoring system for android mine based on JNI technology
WO2022226732A1 (en) * 2021-04-26 2022-11-03 华为技术有限公司 Electronic apparatus, and image processing method of electronic apparatus
EP4297397A4 (en) * 2021-04-26 2024-04-03 Huawei Technologies Co., Ltd. Electronic apparatus, and image processing method of electronic apparatus

Also Published As

Publication number Publication date
CN103295238B (en) 2016-08-10

Similar Documents

Publication Publication Date Title
CN105608456B (en) A kind of multi-direction Method for text detection based on full convolutional network
CN104794479B (en) This Chinese detection method of natural scene picture based on the transformation of local stroke width
CN103714537B (en) Image saliency detection method
CN105046206B (en) Based on the pedestrian detection method and device for moving prior information in video
CN108268527B (en) A method of detection land use pattern variation
CN104166841A (en) Rapid detection identification method for specified pedestrian or vehicle in video monitoring network
CN103020965A (en) Foreground segmentation method based on significance detection
CN109919002B (en) Yellow stop line identification method and device, computer equipment and storage medium
CN107066972B (en) Natural scene Method for text detection based on multichannel extremal region
CN106203237A (en) The recognition methods of container-trailer numbering and device
CN103295238A (en) ROI (region of interest) motion detection based real-time video positioning method for Android platform
CN102393966A (en) Self-adapting image compressive sampling method based on multi-dimension saliency map
CN112883926B (en) Identification method and device for form medical images
CN110930384A (en) Crowd counting method, device, equipment and medium based on density information
CN107944437A (en) A kind of Face detection method based on neutral net and integral image
CN106156691A (en) The processing method of complex background image and device thereof
CN102509308A (en) Motion segmentation method based on mixtures-of-dynamic-textures-based spatiotemporal saliency detection
CN112581495A (en) Image processing method, device, equipment and storage medium
CN110188661A (en) Boundary Recognition method and device
CN109492573A (en) A kind of pointer read method and device
CN102682291B (en) A kind of scene demographic method, device and system
CN106650824B (en) Moving object classification method based on support vector machines
CN108109125A (en) Information extracting method and device based on remote sensing images
Zhu et al. Color-geometric model for traffic sign recognition
CN106228553A (en) High-resolution remote sensing image shadow Detection apparatus and method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20190312

Address after: 100089 5-storey 501, No. 7 (Incubation Building) Fengxian Middle Road, Haidian District, Beijing

Patentee after: BEIJING BOHUI TECHNOLOGY Inc.

Address before: Room 602, Building No. 278 East Suzhou Avenue, Suzhou Industrial Park, Jiangsu Province

Patentee before: Suzhou high Airlines intellectual property rights Operation Co.,Ltd.

Effective date of registration: 20190312

Address after: Room 602, Building No. 278 East Suzhou Avenue, Suzhou Industrial Park, Jiangsu Province

Patentee after: Suzhou high Airlines intellectual property rights Operation Co.,Ltd.

Address before: 210019 No. 69 Olympic Sports Street, Nanjing, Jiangsu Province

Patentee before: Nanjing University of Information Science and Technology

TR01 Transfer of patent right