CN104881149A - Input method and device based on video stream - Google Patents


Info

Publication number
CN104881149A
Authority
CN
China
Prior art keywords
characteristic portion
movement track
code information
character
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510354200.1A
Other languages
Chinese (zh)
Other versions
CN104881149B (en)
Inventor
杨贝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Hongxiang Technical Service Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Qizhi Software Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd and Qizhi Software Beijing Co Ltd
Priority to CN201510354200.1A
Publication of CN104881149A
Application granted
Publication of CN104881149B
Legal status: Active


Landscapes

  • Telephone Function (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides an input method based on a video stream. The method comprises: determining a characteristic portion within a specific recognition region of the video stream; obtaining the movement track of the characteristic portion in the video stream and obtaining at least one piece of code information matching that track; displaying candidate characters corresponding to the code information; and, in response to a selection instruction for a candidate character, updating the text content of the text editing area of the current user interface based on the selected character. The invention further provides an input device based on a video stream. The method and device avoid the situation in which a keyboard used for character input occupies screen space and covers content displayed on the screen, which greatly improves the user experience.

Description

Input method and device based on video stream
Technical field
The present invention relates to the field of input method technology, and in particular to an input method and device based on a video stream.
Background art
With the rapid development of the Internet and the popularization of mobile devices, devices such as mobile phones and tablets have become necessities for most users, and text input is required on all of them. A variety of character input methods and interfaces have therefore emerged, such as pinyin input, handwriting input and stroke input via a keyboard, as well as voice input. Input modes such as pinyin, handwriting and stroke input often occupy a considerable portion of the screen, while voice input is error-prone. Although the screens of today's mobile devices keep growing, screen size remains limited, and a larger usable display gives the user a better experience. If, during text editing, the editing interface occupies part of the screen or even covers some of the displayed content, the user is greatly inconvenienced and the experience degrades significantly.
Summary of the invention
The object of the present invention is to solve at least one of the problems above by providing an input method and device based on a video stream.
To achieve this object, the invention provides an input method based on a video stream, comprising the following steps:
determining a characteristic portion within a specific recognition region of the video stream;
obtaining the movement track of the characteristic portion in the video stream, recognizing the movement track, and obtaining at least one piece of code information matching the track;
displaying candidate characters corresponding to the code information;
in response to a selection instruction for a candidate character, updating the text content of the text editing area of the current user interface based on the selected character.
Further, the method also comprises a preceding step: starting a camera text-input mode, and controlling the camera driver to open and initialize the camera so as to collect video stream data.
Preferably, while collecting video stream data, whether to turn on the flash is determined according to the ambient light intensity.
Specifically, determining whether to turn on the flash comprises the following steps:
counting the average brightness of the frames collected by the camera;
comparing the average brightness with a preset threshold;
if it is greater than the preset threshold, judging that the light is bright and controlling the camera driver to turn off the flash;
if it is less than the preset threshold, judging that the light is dark and controlling the camera driver to turn on the flash.
Further, determining the characteristic portion comprises the following steps:
obtaining the outline information of the focused object containing the characteristic portion;
extracting texture features and/or reflectance features to determine the characteristic portion;
storing the extracted texture features and/or reflectance features for subsequent comparison to confirm the characteristic portion.
Preferably, the focused object is a finger and the characteristic portion is a fingernail.
Preferably, outline information is only acquired for a focused object that has remained still in the specific recognition region for a certain period of time.
Specifically, the specific recognition region is the recognizable region of the camera.
Further, the method also comprises the steps of: detecting whether the characteristic portion is within the specific recognition region; when it is within the region, displaying a first prompt message through the user interface; and when it moves out of the region, displaying a second prompt message through the user interface.
Further, when obtaining the movement track of the characteristic portion, a contact point is determined according to the circle-of-confusion size of the imaging point of the focused object, and the movement track of that contact point is taken as the movement track of the characteristic portion.
Specifically, the code information comprises code information corresponding to Chinese characters, English letters, punctuation marks and control characters.
Specifically, obtaining at least one piece of code information matching the movement track comprises the following steps:
extracting the feature information of the movement track;
looking up, in a preset code information list, the code information matching the feature information;
wherein the code information list stores the mapping relationship between code information and feature information.
Further, when the characteristic portion moves out of the specific recognition region, the movement track recorded before it left the region is recognized, and the code information corresponding to that track is obtained to determine the candidate characters.
Preferably, in the step of displaying the candidate characters corresponding to the code information, the candidate characters are sorted by the degree of similarity between the movement track and each candidate character.
Further, the method also comprises the steps of: receiving a user instruction formed by a curved touch track, and setting the display area of the candidate characters so that they are displayed along that curved touch track.
Specifically, in the step of updating the text content of the text editing area of the current user interface based on the selected character, the selected character is inserted at the cursor position of the text editing area.
Further, the method comprises the step of updating the code information list through a remote interface.
Preferably, the video stream is kept invisible in the user interface while the method is executed.
An input device based on a video stream, comprising:
a determining unit, for determining a characteristic portion within a specific recognition region of the video stream;
a recognition unit, for obtaining the movement track of the characteristic portion in the video stream, recognizing the movement track, and obtaining at least one piece of code information matching the track;
a display unit, for displaying candidate characters corresponding to the code information;
a response unit, for updating the text content of the text editing area of the current user interface based on the selected character, in response to a selection instruction for a candidate character.
Further, before the determining unit determines the characteristic portion, a collecting unit first performs the following step:
starting a camera text-input mode, and controlling the camera driver to open and initialize the camera so as to collect video stream data.
Preferably, while collecting video stream data, the collecting unit determines whether to turn on the flash according to the ambient light intensity.
Specifically, the collecting unit performs the following steps to determine whether to turn on the flash:
counting the average brightness of the frames collected by the camera;
comparing the average brightness with a preset threshold;
if it is greater than the preset threshold, judging that the light is bright and controlling the camera driver to turn off the flash;
if it is less than the preset threshold, judging that the light is dark and controlling the camera driver to turn on the flash.
Specifically, the determining unit performs the following steps to determine the characteristic portion:
obtaining the outline information of the focused object containing the characteristic portion;
extracting texture features and/or reflectance features to determine the characteristic portion;
storing the extracted texture features and/or reflectance features for subsequent comparison to confirm the characteristic portion.
Preferably, the focused object is a finger and the characteristic portion is a fingernail.
Preferably, outline information is only acquired for a focused object that has remained still in the specific recognition region for a certain period of time.
Specifically, the specific recognition region is the recognizable region of the camera.
Further, the device also comprises a detecting unit, for performing the following steps:
detecting whether the characteristic portion is within the specific recognition region; when it is within the region, displaying a first prompt message through the user interface; and when it moves out of the region, displaying a second prompt message through the user interface.
Specifically, when the recognition unit obtains the movement track of the characteristic portion, a contact point is determined according to the circle-of-confusion size of the imaging point of the focused object, and the movement track of that contact point is taken as the movement track of the characteristic portion.
Specifically, the code information comprises code information corresponding to Chinese characters, English letters, punctuation marks and control characters.
Specifically, the recognition unit obtains at least one piece of code information matching the movement track by performing the following steps:
extracting the feature information of the movement track;
looking up, in a preset code information list, the code information matching the feature information;
wherein the code information list stores the mapping relationship between code information and feature information.
Further, when the characteristic portion moves out of the specific recognition region, the recognition unit recognizes the movement track recorded before it left the region, and obtains the code information corresponding to that track to determine the candidate characters.
Preferably, the display unit sorts the candidate characters by the degree of similarity between the movement track and each candidate character.
Further, the device also comprises a setting unit, for receiving a user instruction formed by a curved touch track and setting the display area of the candidate characters so that they are displayed along that curved touch track.
Specifically, in response to a selection instruction for a candidate character, the response unit inserts the selected character at the cursor position of the text editing area, so as to update the text content of the text editing area of the current user interface.
Further, the device also comprises an updating unit, for updating the code information list through a remote interface.
Preferably, the device keeps the video stream invisible in the user interface.
Compared with the prior art, the solution of the present invention has the following advantages:
1. The movement track of the characteristic portion is obtained from video stream data collected by the camera, and candidate characters are determined by recognizing that track. When the user writes by hand within the region the camera can recognize, the characters matching the handwritten track are determined and text input is achieved. Because the camera's capture process is independent of the interface, the writing process can also take place away from the screen. Compared with traditional input modes, the method therefore avoids a character-input keyboard occupying screen space and covering most of the content displayed on the screen, which improves the user experience.
2. A detection capability is provided for whether the characteristic portion or contact point leaves the recognizable region of the camera. When it does, a second prompt message is shown at the corresponding border of the region to remind the user to adjust, so that the camera does not lose the movement track and misrecognize the character, which would force the user to re-enter it and degrade the experience.
3. The curved track that the user usually forms when sliding a touch across the screen is detected and set as the display area for candidate characters, and the candidates are displayed along that curve in order of similarity. This makes it easier for the user to select a candidate character, improves input efficiency and enhances the user experience.
Additional aspects and advantages of the invention will be given in part in the following description; they will become apparent from the description or be learned through practice of the invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of the input method based on a video stream of the present invention;
Fig. 2 is a functional block diagram of the input device based on a video stream of the present invention;
Fig. 3 is a schematic diagram of a mobile terminal screen used as the prompt interface in an embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals denote the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary and intended only to explain the present invention; they should not be construed as limiting it.
Those skilled in the art will understand that, unless expressly stated otherwise, the singular forms "a", "an", "the" and "said" used herein may also include the plural forms. It should be further understood that the word "comprise" used in the specification of the present invention indicates the presence of the stated features, integers, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. It should be understood that when an element is said to be "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intermediate elements may also be present. In addition, "connected" or "coupled" as used herein may include wireless connection or wireless coupling. The word "and/or" as used herein includes all or any unit and all combinations of one or more of the associated listed items.
Those skilled in the art will understand that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by one of ordinary skill in the art to which the present invention belongs. It should also be understood that terms such as those defined in general dictionaries should be understood as having meanings consistent with their meanings in the context of the prior art and, unless specifically defined as herein, should not be interpreted in an idealized or overly formal sense.
Those skilled in the art will understand that "terminal" and "terminal device" as used herein include both devices that have only a wireless signal receiver and no transmission capability, and devices with receiving and transmitting hardware capable of two-way communication over a bidirectional communication link. Such a device may include: a cellular or other communication device, with or without a single-line or multi-line display; a PCS (Personal Communications Service) device, which may combine voice, data processing, fax and/or data communication capabilities; a PDA (Personal Digital Assistant), which may include a radio-frequency receiver, a pager, Internet/intranet access, a web browser, a notepad, a calendar and/or a GPS (Global Positioning System) receiver; and a conventional laptop and/or palmtop computer or other device that has and/or includes a radio-frequency receiver. "Terminal" and "terminal device" as used herein may be portable, transportable, installed in a vehicle (air, sea and/or land), or suitable for and/or configured to operate locally and/or in distributed form at any other location on the earth and/or in space. The "terminal" or "terminal device" used herein may also be a communication terminal, an Internet access terminal, or a music/video playback terminal, for example a PDA, a MID (Mobile Internet Device) and/or a mobile phone with music/video playback functions, or a device such as a smart TV or a set-top box.
Those skilled in the art will understand that the remote network device used herein includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud formed by multiple servers. Here, the cloud is formed by a large number of computers or network servers based on cloud computing, where cloud computing is a kind of distributed computing: a super virtual computer composed of a group of loosely coupled computers. In the embodiments of the present invention, communication between the remote network device, the terminal device and the WNS server may be realized by any communication mode, including but not limited to mobile communication based on 3GPP, LTE or WIMAX, computer network communication based on TCP/IP or UDP, and short-range wireless transmission based on Bluetooth or infrared transmission standards.
As shown in Fig. 1, the present invention provides an input method based on a video stream, comprising the following steps:
S101: determine a characteristic portion within a specific recognition region of the video stream.
The video stream is obtained by driving the camera of a smart device. Therefore, before this step is executed, the camera is first started to collect video stream data, specifically by controlling the camera driver to open and initialize the camera. After the camera is initialized, no preview interface is required on the screen of the mobile terminal, so the collected video stream data remains invisible. Of course, depending on the circumstances, a preview may also be shown at a certain position of the user interface; its display size can be adjusted by the user, or the user can choose whether to show the preview at all. A preview interface occupying a considerable part of the screen is thereby avoided while video stream data is still collected. During collection, whether to turn on the flash is determined according to the ambient light intensity, which can be determined by the following reference method:
1. Count the average brightness of the frames collected by the camera.
In a particular embodiment, the brightness value (i.e. the gray value) of each pixel in several frames of video is obtained, and the mean of these pixel brightness values is calculated.
2. Compare the average brightness with a preset threshold.
The calculated average brightness is compared with a preset threshold, which is determined from the average brightness of video frames collected by the camera under good lighting.
3. If it is greater than the preset threshold, judge that the light is bright and control the camera driver to turn off the flash.
If the average brightness of the current frames is greater than the preset threshold, the input method program calls the relevant camera interface so that the driver turns the flash off.
4. If it is less than the preset threshold, judge that the light is dark and control the camera driver to turn on the flash.
If the average brightness of the current frames is less than the preset threshold, the input method program calls the relevant camera interface so that the driver turns the flash on.
Other methods may be used to determine the ambient light intensity in embodiments of the present invention, so the reference method above does not limit the invention.
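As a rough illustration of the brightness check above, the following Java sketch (using OpenCV's Java bindings; the class name, the frame-averaging helper and the concrete threshold value are assumptions for illustration, not taken from the patent) averages the gray level of recent frames and compares it with a preset threshold to decide whether the driver should be asked to turn the flash on.

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

/** Decides whether the flash should be turned on, based on recent frame brightness. */
public class FlashController {
    // The threshold here is an assumed value; the patent only says it is derived
    // from frames captured under good lighting.
    private static final double BRIGHTNESS_THRESHOLD = 80.0;

    /** Returns the mean gray level (0-255) of one captured frame. */
    static double averageLuminance(Mat bgrFrame) {
        Mat gray = new Mat();
        Imgproc.cvtColor(bgrFrame, gray, Imgproc.COLOR_BGR2GRAY);
        return Core.mean(gray).val[0];
    }

    /** Averages several frames and compares against the preset threshold. */
    static boolean shouldEnableFlash(Iterable<Mat> recentFrames) {
        double sum = 0;
        int n = 0;
        for (Mat frame : recentFrames) {
            sum += averageLuminance(frame);
            n++;
        }
        double avg = (n == 0) ? BRIGHTNESS_THRESHOLD : sum / n;
        return avg < BRIGHTNESS_THRESHOLD; // dark scene: ask the camera driver to open the flash
    }
}
```

In practice the boolean result would simply be forwarded to whatever flash-control interface the platform's camera driver exposes.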
In the collected video stream, the moving target is captured and the object performing the writing is focused on, so as to determine the characteristic portion. Determining the characteristic portion specifically comprises the following steps:
1. Obtain the outline information of the focused object containing the characteristic portion.
Specifically, a Gaussian mixture model can be used for background modelling to extract the background image, and each frame is differenced against the background image to obtain the outline of the focused object containing the characteristic portion (a code sketch of this step appears after the list). To judge the characteristic portion better, the focused object is preferably held still for a certain time in advance, e.g. 3 seconds, so that the camera can focus on it properly.
2. Extract texture features and/or reflectance features to determine the characteristic portion.
Because the texture and/or reflectance of the characteristic portion differ from those of other parts of the focused object, the texture and/or reflectance features of the focused object are extracted and the most distinctive part is taken as the characteristic portion. Since handwriting is the most convenient input operation for the user and a fingernail has good texture and reflectance characteristics, the focused object is preferably a finger and the characteristic portion a fingernail.
3. Store the extracted texture features and/or reflectance features for subsequent comparison to confirm the characteristic portion.
The extracted texture and/or reflectance features are stored, and the characteristic portion extracted from subsequent video images is matched against this stored standard feature model to confirm whether it is the characteristic portion, which facilitates subsequent tracking and recognition.
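A minimal sketch of the outline-extraction step, assuming OpenCV's Java bindings are available; the mixture-of-Gaussians background model matches the approach described above, while the texture/reflectance classification of the fingernail is omitted and all class and method names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.imgproc.Imgproc;
import org.opencv.video.BackgroundSubtractorMOG2;
import org.opencv.video.Video;

/** Extracts the outline of the writing object (e.g. a finger) from the video stream. */
public class OutlineExtractor {
    private final BackgroundSubtractorMOG2 bg = Video.createBackgroundSubtractorMOG2();

    /** Returns the contours of regions that differ from the modelled background. */
    List<MatOfPoint> outlineOf(Mat frame) {
        Mat foregroundMask = new Mat();
        bg.apply(frame, foregroundMask); // mixture-of-Gaussians background model, frame differencing
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(foregroundMask, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
        return contours; // candidate outlines containing the characteristic portion
    }
}
```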
It can be seen that when the characteristic portion moves over the surface of any object in the video stream to write characters, a corresponding movement track is formed. Determining the characteristic portion is therefore the basis for recognizing that movement track.
S102: obtain the movement track of the characteristic portion in the video stream, recognize the movement track, and obtain at least one piece of code information matching the track.
Based on the movement track formed by the determined characteristic portion, the track is recognized to determine the character written by the user. While tracking and recognizing the characteristic portion, it must be ensured that it stays within the recognizable region of the camera, i.e. the specific recognition region of the present invention. To this end, the invention detects in real time whether the characteristic portion is within the recognizable region of the camera: when it is, a first prompt message is shown through the user interface; when it moves out of the region, a second prompt message is shown through the user interface.
Specifically, as shown in Fig. 3, the whole screen is used as the prompt interface, and four points A, B, C and D are displayed on the four edges of the screen. If the characteristic portion can be detected in several consecutive frames, it is within the specific recognition region and all four points are shown in green (the first prompt message), telling the user to keep writing. If, in some frame during detection, the characteristic portion can no longer be detected, it has moved out of the recognizable region of the camera; the position of the characteristic portion in the last frame in which it was detected is then used to determine which border of the screen it was near, and the point on that border is shown in red (the second prompt message). For example, if the characteristic portion was near the right edge in the last recognizable frame, point C on the right of the screen turns red, reminding the user that the right boundary of the camera's recognizable region has been crossed and that writing should be paused and adjusted, so that misrecognition does not force the user to re-enter the character and degrade the experience.
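The border-prompt logic above can be sketched as follows. The mapping of points A, B and D to particular edges is an assumption (the description only states that point C lies on the right edge), and the class and method names are illustrative.

```java
/** Maps the last known position of the characteristic portion to the screen-edge
 *  point (A/B/C/D) that should turn red when tracking is lost. */
public class RegionMonitor {
    /** Assumed assignment of the four prompt points to edges; only C = right is stated in the text. */
    enum Edge { TOP_A, LEFT_B, RIGHT_C, BOTTOM_D }

    private final int width;
    private final int height;

    RegionMonitor(int frameWidth, int frameHeight) {
        this.width = frameWidth;
        this.height = frameHeight;
    }

    /** lastX/lastY: position in the last frame where the characteristic portion was still detected. */
    Edge nearestEdge(int lastX, int lastY) {
        int dLeft = lastX, dRight = width - lastX, dTop = lastY, dBottom = height - lastY;
        int min = Math.min(Math.min(dLeft, dRight), Math.min(dTop, dBottom));
        if (min == dRight) return Edge.RIGHT_C;  // the patent's example: lost near the right, point C turns red
        if (min == dLeft)  return Edge.LEFT_B;
        if (min == dTop)   return Edge.TOP_A;
        return Edge.BOTTOM_D;
    }
}
```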
In other embodiments, the movement track of the characteristic portion can also be determined from the movement track of a contact point: the contact point is detected in the video images and its track is used as the track of the characteristic portion. The contact point is the sharpest imaging point, generated on the focal plane, at which the focused object touches the input plane. Specifically, the contact point is determined from the circle-of-confusion size of the imaging points of the focused object on the focal plane. The circle of confusion is the circular projection of diffusion formed on the image plane when an object point is imaged, because aberration prevents its imaging beam from converging to a single point; the larger the circle of confusion, the blurrier the image, and vice versa. After the focal plane is locked, points on the focal plane are imaged most sharply. Each video frame can therefore be compared with the previous frame to find the difference region between them, and the sharpness of that region is compared with the imaging sharpness of the focal plane, where sharpness is determined by the circle-of-confusion size of the imaging points. If the circle of confusion of the difference region between two adjacent frames differs from that of the actual focal plane by less than 10%, the difference region is judged to be the contact point.
Before the contact point is determined, it must be ensured that there are no other objects on the input plane and that it is reasonably flat. Because the input plane must be photographed in advance to lock the focal plane, any other object on the plane would cause the camera to auto-focus on it, and the locked focal plane would then have a large error. The input plane should therefore be flat and clear of clutter before the contact point is determined, so that the correct focal plane is locked.
While tracking and recognizing the movement track of the contact point, the contact point must also be prevented from leaving the recognition range of the camera. When it does, the camera's image becomes blurred and a corresponding indication is sent to the MCU (Microcontroller Unit) of the smart device, which displays a red dot at the corresponding position on the edge of the screen; that position is determined from the contact-point position in the last frame in which the characteristic portion could be detected. This ensures that the camera correctly captures the movement track of the contact point and avoids misrecognition that would force the user to re-enter the character and degrade the experience.
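The 10% sharpness comparison described above might look roughly like the following sketch. The patent compares circle-of-confusion sizes directly; this sketch substitutes Laplacian variance as a sharpness proxy, which is an assumption, and all names are illustrative.

```java
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDouble;
import org.opencv.imgproc.Imgproc;

/** Treats an inter-frame difference region as the contact point when its sharpness
 *  is within 10% of the locked focal plane's sharpness. */
public class ContactPointDetector {
    private final double focalPlaneSharpness; // measured once when the focal plane is locked

    ContactPointDetector(double focalPlaneSharpness) {
        this.focalPlaneSharpness = focalPlaneSharpness;
    }

    /** Laplacian variance as a stand-in for circle-of-confusion size
     *  (a smaller blur circle means a sharper image and a higher variance). */
    static double sharpness(Mat grayRegion) {
        Mat lap = new Mat();
        Imgproc.Laplacian(grayRegion, lap, CvType.CV_64F);
        MatOfDouble mean = new MatOfDouble();
        MatOfDouble stdDev = new MatOfDouble();
        Core.meanStdDev(lap, mean, stdDev);
        double sd = stdDev.get(0, 0)[0];
        return sd * sd;
    }

    /** Applies the 10% rule from the description to one difference region. */
    boolean isContactPoint(Mat diffRegionGray) {
        double s = sharpness(diffRegionGray);
        return Math.abs(s - focalPlaneSharpness) / focalPlaneSharpness < 0.10;
    }
}
```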
With the constraints provided above, a person skilled in the art can obtain the movement track required by the present invention from the video stream; the method of obtaining the track itself is known and is not repeated here. After the movement track of the characteristic portion or contact point is obtained, it is recognized by extracting its feature information, which can be described by gradients, corner points, SIFT features, etc. This feature information is matched against the code information in a code information list to obtain at least one piece of code information matching the track. The detailed process comprises the following steps:
1. Extract the feature information of the movement track.
2. Look up, in a preset code information list, the code information matching the feature information.
Here the code information list stores, in association, the mapping relationship between code information and feature information. The code information comprises code information corresponding to Chinese characters, English letters, punctuation marks and control characters, i.e. code that the mobile device can recognize as representing the corresponding character, such as an ASCII value.
In a particular embodiment, the code information list is preset. First the relationship between each character and its feature information is established: specifically, the correspondence between a Chinese character and its feature information, between an English letter and its feature information, between a punctuation mark and its feature information, and between a control character and its feature information. Since the correspondence between a character and its code information is known, the mapping between the feature information and the code information is then established. As the input method's character library is updated, the code information list is automatically pushed and updated by a cloud server through a remote interface, to add or update the input method's character information.
In a particular embodiment, when the user draws a standard rightward arrow on the input plane, the ASCII value recorded in the code information list for the feature information of that rightward arrow, 127, is found; the character entered by the user is thus recognized as the "delete" control character, and the character at the current cursor position of the text editing area is deleted. Likewise, when the user writes a Chinese comma on the input plane, the corresponding movement track is obtained and its feature information extracted, the ASCII value 60 recorded in the code information list for the feature information of the Chinese comma is found, the character entered by the user is recognized as the Chinese comma punctuation character, and the comma is displayed at the cursor position of the current user's text editing area. In the same way, the track of a Chinese character or English character is obtained, its feature information is extracted, its code information is determined by looking up the code information list, and the character corresponding to that code information is displayed to the user by the mobile device.
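A toy stand-in for the preset code information list described above, assuming a string feature descriptor as the lookup key; a real recognizer would use gradient, corner or SIFT descriptors and approximate matching rather than exact keys, so the names and keys here are illustrative. The rightward-arrow example mirrors the one in the text.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Minimal stand-in for the code information list: it maps a trajectory feature
 *  descriptor to the code value(s) of matching characters. */
public class CodeInfoList {
    // The String key is a placeholder for whatever feature descriptor the recognizer produces.
    private final Map<String, List<Integer>> featureToCodes = new HashMap<>();

    void put(String featureDescriptor, int codeValue) {
        featureToCodes.computeIfAbsent(featureDescriptor, k -> new ArrayList<>()).add(codeValue);
    }

    /** Returns the code information matching the extracted track features (possibly several candidates). */
    List<Integer> lookup(String featureDescriptor) {
        return featureToCodes.getOrDefault(featureDescriptor, List.of());
    }

    public static void main(String[] args) {
        CodeInfoList list = new CodeInfoList();
        list.put("right-arrow", 127); // the text's example: a rightward arrow maps to the delete control code
        System.out.println(list.lookup("right-arrow")); // prints [127]
    }
}
```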
While the movement track of the characteristic portion or contact point is being recognized, the track is saved in real time and recognized in real time. As soon as a matching character exists in the code information list, that character is displayed, the track stored so far is deleted, the track information in memory is cleared, and a new track recognition process is started. When the characteristic portion moves out of the camera's recognizable region, or the contact point leaves the focal plane, the track information recorded before leaving the region is saved and recognized; if the code information obtained for this track has a matching character in the code information list, that character is displayed as a candidate character, otherwise a new track continues to be recorded and recognized in real time until a new candidate character is matched. Recognition of the movement track is thereby achieved and the corresponding candidate characters are obtained, so that the characters written by the user can subsequently be shown to the user.
It can be seen that this step obtains the movement track from the video stream, performs character recognition on it, and obtains the corresponding recognized code information. Under the teaching of the present invention, this process can easily be implemented by a person skilled in the art.
S103: display the candidate characters corresponding to the code information.
The code information is determined from the image feature information corresponding to the movement track formed by the characteristic portion, and there is an associated mapping between the code information and that image feature information. To display the candidate characters in an optimized way, they are sorted by the degree of similarity between the movement track and each candidate character, with the most similar candidates first; the similarity is determined by the matching degree between the feature information of the track and the feature information of the candidate character. The candidate with the highest similarity, placed first, is preferably taken as the character correctly recognized from the track.
In the embodiment of the invention, the candidate characters corresponding to the code information are displayed according to the user's usual habits. Specifically, the user's usual gesture when making selections on the screen is detected to determine the display area for the candidate characters. When a user holds a hand-held device such as a smartphone, four fingers usually grip the body while the thumb slides, touches and selects on the screen. To make it easy to select candidates with the thumb, the curved touch track of the user on the screen is first detected and set as the display area for the candidate characters, and the candidates are then displayed one by one along that curve in order of similarity, which makes selection convenient and improves the user experience.
Normally the candidate with the highest matching degree is the character the user actually wrote; if it is wrong, the lower-ranked candidates are unlikely to be accurate either. To further improve the user experience, the highest-ranked candidate, i.e. the one sorted first, can therefore be committed automatically to the current user's text editing interface, saving the time the user would spend selecting the correct character and improving input efficiency.
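The similarity-based ordering of the candidates might be sketched as follows; the Candidate record and its similarity score are assumptions standing in for whatever matching degree the recognizer actually produces.

```java
import java.util.Comparator;
import java.util.List;

/** Orders candidate characters by how closely their stored features match the drawn track. */
public class CandidateRanker {
    /** A recognized character plus its matching degree against the movement track (assumed shape). */
    record Candidate(char character, double similarity) {}

    /** Highest similarity first; the first element may be committed to the editor automatically. */
    static List<Candidate> rank(List<Candidate> candidates) {
        return candidates.stream()
                .sorted(Comparator.comparingDouble(Candidate::similarity).reversed())
                .toList();
    }
}
```

The ranked list would then be laid out along the detected curved touch track, first element nearest the start of the curve.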
S104: in response to a selection instruction for a candidate character, update the text content of the text editing area of the current user interface based on the selected character.
The selection instruction includes selection by touching the candidate character, selection via a keyboard or virtual key, and selection by voice command.
When the user's selection instruction for a candidate character is received, it is responded to by updating the selected character into the text editing area of the current user interface. Specifically, the selected character is inserted at the cursor position of the text editing area; when the selected character is a control character, for example the delete operator, the single text character before or after the cursor position is deleted.
Taking the Android system as an example, the process by which a system application performs input through the input method of the present invention is briefly described. When the user performs an input operation in a text edit box of an application, the input method is first selected: the application calls the setInputMethod method of the InputMethodManager class, and the system calls the bindInput method of InputMethod to bind the user-selected input method to the application (no binding is needed if a default input method has already been selected). After that, the system calls the onCreate method of the present input method to initialize it, calls getCurrentInputConnection to obtain the InputConnection object for interacting with the InputMethod, and calls InputConnection.setComposingText() to update the character, recognized by the present input method from the movement track captured by the camera, into the text editing area of the current user interface.
The above is only a brief example of how the present input method operates on the Android system and does not limit the invention; it applies equally to mobile operating systems such as iOS and Windows Phone, and a person skilled in the art will know how to adapt the method of the invention to different operating systems.
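For the Android flow just described, a hedged sketch of the commit step is shown below, using the standard InputMethodService and InputConnection APIs mentioned in the text; the service class name and helper methods are illustrative, not part of the patent.

```java
import android.inputmethodservice.InputMethodService;
import android.view.inputmethod.InputConnection;

/** Skeleton of an input-method service that pushes a recognized character into the
 *  text editing area of the current user interface. */
public class VideoStreamIme extends InputMethodService {

    /** Called once the recognizer has mapped a movement track to a character. */
    void commitRecognisedCharacter(CharSequence character) {
        InputConnection ic = getCurrentInputConnection();
        if (ic == null) return;            // no editor is currently bound
        ic.setComposingText(character, 1); // show it at the cursor, as described above
        ic.finishComposingText();          // or commitText(character, 1) to insert it directly
    }

    /** The "delete" control character maps to removing one character before the cursor. */
    void deleteBeforeCursor() {
        InputConnection ic = getCurrentInputConnection();
        if (ic != null) ic.deleteSurroundingText(1, 0);
    }
}
```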
As shown in Fig. 2, the present invention also provides an input device based on a video stream, comprising a determining unit 11, a recognition unit 12, a display unit 13 and a response unit 14, and further comprising a collecting unit 15, a detecting unit 16 and a setting unit 17, wherein:
the determining unit 11 is used to determine a characteristic portion within a specific recognition region of the video stream.
The video stream is obtained by driving the camera of a smart device. Therefore, before this step is executed, the camera is first started to collect video stream data: specifically, the collecting unit 15 of the device controls the camera driver to open and initialize the camera so as to collect video stream data. After the camera is initialized, no preview interface is required on the screen of the mobile terminal, so the collected video stream data remains invisible. Of course, depending on the circumstances, a preview may also be shown at a certain position of the user interface; its display size can be adjusted by the user, or the user can choose whether to show the preview at all. A preview interface occupying a considerable part of the screen is thereby avoided while video stream data is still collected. During collection, the collecting unit 15 determines whether to turn on the flash according to the ambient light intensity, by performing the following steps:
1. Count the average brightness of the frames collected by the camera.
In a particular embodiment, the brightness value (i.e. the gray value) of each pixel in several frames of video is obtained, and the mean of these pixel brightness values is calculated.
2. Compare the average brightness with a preset threshold.
The calculated average brightness is compared with a preset threshold, which is determined from the average brightness of video frames collected by the camera under good lighting.
3. If it is greater than the preset threshold, judge that the light is bright and control the camera driver to turn off the flash.
If the average brightness of the current frames is greater than the preset threshold, the input method program calls the relevant camera interface so that the driver turns the flash off.
4. If it is less than the preset threshold, judge that the light is dark and control the camera driver to turn on the flash.
If the average brightness of the current frames is less than the preset threshold, the input method program calls the relevant camera interface so that the driver turns the flash on.
The collecting unit may also determine the ambient light intensity by other methods in embodiments of the present invention, which do not limit the invention.
The determining unit 11 captures the moving target in the collected video stream and focuses on the object performing the writing, so as to determine the characteristic portion. The determining unit 11 performs the following specific steps:
1. Obtain the outline information of the focused object containing the characteristic portion.
Specifically, a Gaussian mixture model can be used for background modelling to extract the background image, and each frame is differenced against the background image to obtain the outline of the focused object containing the characteristic portion. To judge the characteristic portion better, the focused object is preferably held still for a certain time in advance, e.g. 3 seconds, so that the camera can focus on it properly.
2. Extract texture features and/or reflectance features to determine the characteristic portion.
Because the texture and/or reflectance of the characteristic portion differ from those of other parts of the focused object, the texture and/or reflectance features of the focused object are extracted and the most distinctive part is taken as the characteristic portion. Since handwriting is the most convenient input operation for the user and a fingernail has good texture and reflectance characteristics, the focused object is preferably a finger and the characteristic portion a fingernail.
3. Store the extracted texture features and/or reflectance features for subsequent comparison to confirm the characteristic portion.
The extracted texture and/or reflectance features are stored, and the characteristic portion extracted from subsequent video images is matched against this stored standard feature model to confirm whether it is the characteristic portion, which facilitates subsequent tracking and recognition.
It can be seen that when the characteristic portion moves over the surface of any object in the video stream to write characters, a corresponding movement track is formed. Determining the characteristic portion is therefore the basis for recognizing that movement track.
The recognition unit 12 is used to obtain the movement track of the characteristic portion in the video stream, recognize the movement track, and obtain at least one piece of code information matching the track.
Based on the movement track formed by the determined characteristic portion, the recognition unit 12 recognizes the track to determine the character written by the user. While tracking and recognizing the characteristic portion, it must be ensured that it stays within the recognizable region of the camera, i.e. the specific recognition region of the present invention. The device therefore also comprises a detecting unit 16, which detects in real time whether the characteristic portion is within the recognizable region of the camera: when it is, the detecting unit 16 shows a first prompt message through the user interface; when it moves out of the region, a second prompt message is shown through the user interface.
Specifically, as shown in Fig. 3, the whole screen is used as the prompt interface, and four points A, B, C and D are displayed on the four edges of the screen. If the characteristic portion can be detected in several consecutive frames, it is within the specific recognition region and all four points are shown in green (the first prompt message), telling the user to keep writing. If, in some frame during the detection by the detecting unit 16, the characteristic portion can no longer be detected, it has moved out of the recognizable region of the camera; the position of the characteristic portion in the last frame in which it was detected is then used to determine which border of the screen it was near, and the point on that border is shown in red (the second prompt message). For example, if the characteristic portion was near the right edge in the last recognizable frame, point C on the right of the screen turns red, reminding the user that the right boundary of the camera's recognizable region has been crossed and that writing should be paused and adjusted, so that misrecognition does not force the user to re-enter the character and degrade the experience.
In other embodiments, the movement track of the characteristic portion can also be determined from the movement track of a contact point, i.e. the contact point detected in the video images by the detecting unit 16, whose track is used as the track of the characteristic portion. The contact point is the sharpest imaging point, generated on the focal plane, at which the focused object touches the input plane. Specifically, the contact point is determined from the circle-of-confusion size of the imaging points of the focused object on the focal plane. The circle of confusion is the circular projection of diffusion formed on the image plane when an object point is imaged, because aberration prevents its imaging beam from converging to a single point; the larger the circle of confusion, the blurrier the image, and vice versa. After the focal plane is locked, points on the focal plane are imaged most sharply. Each video frame can therefore be compared with the previous frame to find the difference region between them, and the sharpness of that region is compared with the imaging sharpness of the focal plane, where sharpness is determined by the circle-of-confusion size of the imaging points. If the circle of confusion of the difference region between two adjacent frames differs from that of the actual focal plane by less than 10%, the difference region is judged to be the contact point.
Before the contact point is determined, it must be ensured that there are no other objects on the input plane and that it is reasonably flat. Because the input plane must be photographed in advance to lock the focal plane, any other object on the plane would cause the camera to auto-focus on it, and the locked focal plane would then have a large error. The input plane should therefore be flat and clear of clutter before the contact point is determined, so that the correct focal plane is locked.
While tracking and recognizing the movement track of the contact point, the contact point must also be prevented from leaving the recognition range of the camera. When it does, the camera's image becomes blurred and a corresponding indication is sent to the MCU (Microcontroller Unit) of the smart device, which displays a red dot at the corresponding position on the edge of the screen; that position is determined from the contact-point position in the last frame in which the characteristic portion could be detected. This ensures that the camera correctly captures the movement track of the contact point and avoids misrecognition that would force the user to re-enter the character and degrade the experience.
In conjunction with the constraint condition that the invention described above provides, those skilled in the art can obtain movement locus required for the present invention from video flowing.The method of described acquisition movement locus belongs to known technology, does not repeat them here.After described recognition unit 12 gets the movement locus of described characteristic portion or contact point, identify based on this movement locus, identified by the individual features information extracting described movement locus, described characteristic information can be described by gradient, angle point, sift feature etc.Mate with the coded message in coded message list based on this characteristic information, obtain at least one coded message matched with this movement locus.Described recognition unit 12 specifically performs following steps:
1. extract the characteristic information of described movement locus;
2., from the coded message list preset, search the coded message matched with described characteristic information.
Wherein, described the encoding list information is used for the mapping relations between association store coded message and characteristic information.The corresponding coded message such as described coded message comprises Chinese character, English alphabet, punctuation mark, control meet, the specifically discernible coded message for characterizing respective symbols of finger mobile device, as ASCII character value.
In a particular embodiment, pre-arranged code information list, first the relation between character and its characteristic information is set up, be specially the corresponding relation between Chinese character and the characteristic information of Chinese character, corresponding relation between English alphabet and the characteristic information of this letter, corresponding relation between punctuation mark and its characteristic information, controls the corresponding relation met between its characteristic information.Corresponding relation between character and its coded message is known, and then sets up the mapping relations between described character and its coded message.Corresponding to the renewal of input method character library, described coded message list pushes renewal by cloud server automatically by remote port, to increase or to upgrade the character information of input method.
In a particular embodiment, when user is at standardized of input plane arrow to the right, the ASCII character value of then searching in coded message list the characteristic information of this arrow drawn recorded to the right corresponding is 127, namely the character identifying user's input is " deletion " control character, by the character deletion at the now cursor place of text edit area.In like manner, when user writes Chinese comma at input plane, get corresponding movement locus and characteristic information extraction, search the ASCII character value 60 that the characteristic information of this Chinese comma recorded in coded message list is corresponding, namely the character identifying user's input is the punctuation character of Chinese comma, Chinese comma is presented at the cursor place of the text edit area of active user.In like manner, obtaining the track of Chinese character or English character, and extract corresponding characteristic information, determining its coded message by searching coded message list, by mobile device by Charactes Display corresponding for this coded message to user.
In the process that the movement locus of described characteristic portion or contact point is identified, recognition unit also needs to preserve its movement locus in real time, and Real time identification is carried out to the movement locus preserved, once exist and the character mated in coded message list, then show this character and the movement locus stored before this is deleted, empty the motion track information stored in internal memory, then restart new track identification process.When described characteristic portion exceeds the identified region of camera or contact point leaves focal plane, then preserve the motion track information recorded before exceeding described identified region, and the movement locus of this storage is identified, if there is the character of coupling in the coded message of this movement locus obtained in coded message list, then by this character alternatively Charactes Display, if there is no the character mated then continues to record new movement locus, and Real time identification, until the candidate characters that match cognization makes new advances.Realize the identification to movement locus thus, obtain corresponding candidate characters, to be shown to its character write of user.
It can be seen that the recognition unit 12 of the present invention obtains a movement trajectory from the video stream, performs character recognition on that trajectory, and obtains the corresponding recognized coded information; under the above specification of the invention, this process can be readily implemented by those skilled in the art.
The display unit 13 is used to display the candidate character corresponding to the coded information.
The coded information is determined from the image characteristic information corresponding to the movement trajectory formed by the characteristic part, there being an associative mapping relationship between the coded information and the image characteristic information. To optimize the display of candidate characters, the candidates are sorted by their degree of similarity to the movement trajectory, with more similar candidates placed first; the similarity is determined by the matching degree between the characteristic information of the trajectory and the characteristic information of the candidate character. The candidate with the highest similarity, placed at the front, is preferably taken as the character correctly recognized from the trajectory.
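A minimal sketch of this similarity-based ordering is given below, assuming a Candidate type that pairs each character with a matching score; both names are illustrative rather than part of the invention.

import java.util.Comparator;
import java.util.List;

// Orders candidates so that the character most similar to the written
// trajectory comes first; the first entry can then also be committed
// automatically, as described further below.
public class CandidateRanker {

    public static class Candidate {
        public final String character;
        public final double similarity;   // matching degree between trajectory and character features
        public Candidate(String character, double similarity) {
            this.character = character;
            this.similarity = similarity;
        }
    }

    // Highest similarity first.
    public static void rank(List<Candidate> candidates) {
        candidates.sort(Comparator.comparingDouble((Candidate c) -> c.similarity).reversed());
    }
}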
The display unit 13 of the present embodiment displays the candidate characters corresponding to the coded information in a way that follows the user's usual habits. In particular, the setting unit 17 of the device of the invention detects in advance the gesture the user habitually makes when selecting items on screen, determines the user's usual touch trajectory on the screen, and sets this usual touch trajectory as the display area for candidate characters. Further, when a user holds a hand-held device such as a smartphone, four fingers usually grip the body while the thumb slides, taps and selects on the screen. To make it easy to select candidates with the thumb, the setting unit 17 first detects the curved touch trajectory of the user's thumb on the screen and sets this curve as the display area for the candidate characters; the display unit 13 then lays the candidate characters out along this curve in order of similarity, which makes it convenient for the user to pick a candidate and improves the user experience.
Under normal circumstances, the candidate with the highest matching degree is generally the character the user actually wrote; if it is wrong, the lower-ranked candidates are unlikely to be accurate either. Therefore, to further improve the user experience, the highest-matching candidate, i.e. the candidate ranked first, can be updated to the user's current text editing interface automatically, saving the user the time of selecting the correct character and improving input efficiency.
The response unit 14 is used to update, in response to a selection instruction for a candidate character, the text content of the current user interface's text edit area based on the selected character.
The response unit 14 receives the user's selection instruction for a candidate character and responds to it, specifically by updating the current user interface's text edit area with the selected character. Further, the selected character is added at the cursor position of the text edit area; when the selected character is an instruction character, for example the delete operator, the single text character at the cursor position is deleted.
Taking the Android system as an example, the process by which a system application performs input operations through the input method of the present invention is briefly described. Specifically, when the user performs an input operation in a text edit box of an application, the input method is selected first; the application calls the setInputMethod method of the InputMethodManager class, and the system calls the bindInput method of InputMethod to bind the user-selected input method to the application (no binding is needed if a default input method is already selected). After that, the system calls the onCreate method of the present input method to initialize it, calls getCurrentInputConnection to obtain the InputConnection object used to interact with the InputMethod, and calls InputConnection.setComposingText() to update the character recognized by the present input method from the trajectory captured by the camera into the text edit area of the current user interface.
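A minimal sketch of this final step is shown below, assuming a custom InputMethodService that receives the recognized character from the trajectory pipeline. The class name and the onCharacterRecognized() callback are placeholders, not Android APIs; getCurrentInputConnection(), commitText() and deleteSurroundingText() are standard Android IME calls, and setComposingText(), as mentioned above, could be used in the same place if the text should first be shown as a composing span.

import android.inputmethodservice.InputMethodService;
import android.view.inputmethod.InputConnection;

// Minimal sketch of delivering a recognised character to the focused text
// edit box. onCharacterRecognized() stands in for the camera-trajectory
// recognition pipeline; it is not part of the Android framework.
public class VideoStreamIme extends InputMethodService {

    // Assumed to be called once the recogniser maps a movement trajectory
    // to a character (or to the "delete" control code).
    public void onCharacterRecognized(String character, boolean isDelete) {
        InputConnection ic = getCurrentInputConnection();
        if (ic == null) return;
        if (isDelete) {
            // Remove the single character before the cursor, as the
            // description does for the "delete" control character.
            ic.deleteSurroundingText(1, 0);
        } else {
            // Insert the recognised character at the cursor position.
            ic.commitText(character, 1);
        }
    }
}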
The above is only a brief illustration of the operation of the input method of the present invention on the Android system and should not be taken as a limitation of the invention; it applies equally to mobile operating systems such as iOS and Windows Phone, and those skilled in the art can make the corresponding adaptations of the method of the invention for different operating systems.
To further illustrate the method of the invention, the following application scenario in which the method or device of the invention is implemented is provided.
In a mobile phone implementing the method or device of the present invention, an application used by a user provides an interface in which a text edit area occupies most of the screen. The application calls the video-stream-based input method implemented by the method or device of the invention and starts the camera to collect video stream data of the user's handwriting. When all four edges of the phone screen are shown in green, the user can start writing. During writing, the input method recognizes the movement trajectory in the video stream that characterizes the user's input as the corresponding candidate characters, displays these candidates in a row along the curve the user's finger usually slides on the screen, then receives the user's instruction selecting any one of the displayed candidates and shows the chosen character in the user's current text edit area. As the user writes, the input method of the invention recognizes the written trajectory in real time, and the displayed candidates change dynamically accordingly. In addition, if the writing moves beyond the camera's identified region, a red dot is shown on the corresponding one of the four borders of the screen to prompt the user as to which border of the identified region has been crossed. The user can thus complete text input through the video-stream-based input method or device of the invention; the displayed candidates occupy only one row of screen space, leaving the larger remaining space for the text edit area, so the user can input characters more smoothly and efficiently and the text-editing experience is improved.
In summary, the video-stream-based input method and device of the present invention determine the character text input by the user by recognizing the movement trajectory captured by the camera, solving the poor-experience problem of conventional keyboard input occupying a considerable area of the screen, while also improving input efficiency and allowing the user to edit text on a mobile terminal more efficiently.
The above are only some embodiments of the present invention. It should be pointed out that those skilled in the art can make further improvements and modifications without departing from the principles of the invention, and such improvements and modifications should also be regarded as falling within the scope of protection of the invention.

Claims (10)

1. An input method based on a video stream, characterized in that it comprises the following steps:
determining a characteristic part within a specific identified region of the video stream;
obtaining the movement trajectory of the characteristic part in the video stream, performing recognition based on the movement trajectory, and obtaining at least one piece of coded information matching the trajectory;
displaying the candidate character corresponding to the coded information;
in response to a selection instruction for the candidate character, updating the text content of the current user interface's text edit area based on the selected character.
2. The method according to claim 1, characterized in that the method further comprises a preceding step: starting a camera text-input mode, and controlling the camera driver to open and initialize the camera so as to collect video stream data.
3. The method according to claim 2, characterized in that, during collection of the video stream data, whether to turn on the flash is determined based on the ambient light intensity.
4. The method according to claim 3, characterized in that the step of determining whether to turn on the flash comprises the following specific steps:
calculating the average brightness of the frame images collected by the camera;
comparing the average brightness with a predetermined threshold;
if it is greater than the predetermined threshold, judging the light to be bright and controlling the camera driver to keep the flash off;
if it is less than the predetermined threshold, judging the light to be dark and controlling the camera driver to turn on the flash.
5. The method according to claim 1, characterized in that the step of determining the characteristic part comprises the following specific steps:
obtaining the contour information of the focused object containing the characteristic part;
extracting texture features and/or reflective features to determine the characteristic part;
storing the extracted texture features and/or reflective features for subsequent comparison to determine the characteristic part.
6. An input device based on a video stream, characterized in that it comprises:
a determining unit, for determining a characteristic part within a specific identified region of the video stream;
a recognition unit, for obtaining the movement trajectory of the characteristic part in the video stream, performing recognition based on the movement trajectory, and obtaining at least one piece of coded information matching the trajectory;
a display unit, for displaying the candidate character corresponding to the coded information;
a response unit, for updating, in response to a selection instruction for the candidate character, the text content of the current user interface's text edit area based on the selected character.
7. The device according to claim 6, characterized in that, before the determining unit determines the characteristic part, a collecting unit first performs the following step:
starting a camera text-input mode, and controlling the camera driver to open and initialize the camera so as to collect video stream data.
8. The device according to claim 6, characterized in that, during collection of the video stream data, the collecting unit determines, based on the ambient light intensity, whether to turn on the flash.
9. The device according to claim 8, characterized in that the collecting unit performs the following specific steps to determine whether to turn on the flash:
calculating the average brightness of the frame images collected by the camera;
comparing the average brightness with a predetermined threshold;
if it is greater than the predetermined threshold, judging the light to be bright and controlling the camera driver to keep the flash off;
if it is less than the predetermined threshold, judging the light to be dark and controlling the camera driver to turn on the flash.
10. The device according to claim 6, characterized in that the determining unit performs the following specific steps to determine the characteristic part:
obtaining the contour information of the focused object containing the characteristic part;
extracting texture features and/or reflective features to determine the characteristic part;
storing the extracted texture features and/or reflective features for subsequent comparison to determine the characteristic part.
CN201510354200.1A 2015-06-24 2015-06-24 Input method and device based on video stream Active CN104881149B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510354200.1A CN104881149B (en) 2015-06-24 2015-06-24 Input method and device based on video stream

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510354200.1A CN104881149B (en) 2015-06-24 2015-06-24 Input method and device based on video stream

Publications (2)

Publication Number Publication Date
CN104881149A true CN104881149A (en) 2015-09-02
CN104881149B CN104881149B (en) 2018-04-20

Family

ID=53948671

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510354200.1A Active CN104881149B (en) 2015-06-24 2015-06-24 Input method and device based on video flowing

Country Status (1)

Country Link
CN (1) CN104881149B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101059840A (en) * 2007-05-24 2007-10-24 深圳市杰特电信控股有限公司 Words input method using mobile phone shooting style
WO2009111138A1 (en) * 2008-03-04 2009-09-11 Apple Inc. Handwriting recognition interface on a device
CN103019590A (en) * 2012-11-26 2013-04-03 上海量明科技发展有限公司 Method, client side and system for inputting handwriting characters and character strings

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Deng Jun: "Research on a Handwriting Input Method Based on Computer Vision", Wanfang Data Knowledge Service Platform *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113204283A (en) * 2021-04-30 2021-08-03 Oppo广东移动通信有限公司 Text input method, text input device, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN104881149B (en) 2018-04-20

Similar Documents

Publication Publication Date Title
CN111062312B (en) Gesture recognition method, gesture control device, medium and terminal equipment
EP3086206B1 (en) Method, apparatus and computer program product for providing gesture analysis
US8379987B2 (en) Method, apparatus and computer program product for providing hand segmentation for gesture analysis
CN112954210B (en) Photographing method and device, electronic equipment and medium
US9531999B2 (en) Real-time smart display detection system
US9886762B2 (en) Method for retrieving image and electronic device thereof
US20100231529A1 (en) Method and apparatus for selecting text information
WO2017161665A1 (en) Image recognition method, apparatus and device, and nonvolatile computer storage medium
WO2018171047A1 (en) Photographing guide method, device and system
WO2018072271A1 (en) Image display optimization method and device
US20110222775A1 (en) Image attribute discrimination apparatus, attribute discrimination support apparatus, image attribute discrimination method, attribute discrimination support apparatus controlling method, and control program
US11087137B2 (en) Methods and systems for identification and augmentation of video content
US9129177B2 (en) Image cache
US20190155883A1 (en) Apparatus, method and computer program product for recovering editable slide
CN103106388B (en) Method and system of image recognition
CN110463177A (en) The bearing calibration of file and picture and device
WO2023115911A1 (en) Object re-identification method and apparatus, electronic device, storage medium, and computer program product
Xiong et al. Snap angle prediction for 360 panoramas
US11348254B2 (en) Visual search method, computer device, and storage medium
CN109408652B (en) Picture searching method, device and equipment
CN111722717A (en) Gesture recognition method and device and computer readable storage medium
CN104881149A (en) Input method and device based on video stream
CN113271379B (en) Image processing method and device and electronic equipment
KR20150101846A (en) Image classification service system based on a sketch user equipment, service equipment, service method based on sketch and computer readable medium having computer program recorded therefor
CN115731604A (en) Model training method, gesture recognition method, device, equipment and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220725

Address after: 300450 No. 9-3-401, No. 39, Gaoxin 6th Road, Binhai Science Park, Binhai New Area, Tianjin

Patentee after: 3600 Technology Group Co.,Ltd.

Address before: 100088 room 112, block D, 28 new street, new street, Xicheng District, Beijing (Desheng Park)

Patentee before: BEIJING QIHOO TECHNOLOGY Co.,Ltd.

Patentee before: Qizhi software (Beijing) Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20230711

Address after: 1765, floor 17, floor 15, building 3, No. 10 Jiuxianqiao Road, Chaoyang District, Beijing 100015

Patentee after: Beijing Hongxiang Technical Service Co.,Ltd.

Address before: 300450 No. 9-3-401, No. 39, Gaoxin 6th Road, Binhai Science Park, Binhai New Area, Tianjin

Patentee before: 3600 Technology Group Co.,Ltd.

TR01 Transfer of patent right