CN107484034A - Subtitle display method, terminal and computer-readable storage medium - Google Patents
- Publication number
- CN107484034A (application number CN201710588083.4A)
- Authority
- CN
- China
- Prior art keywords
- user
- feature information
- preset area
- image
- physiological feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/485—End-user interface for client configuration
- H04N21/4856—End-user interface for client configuration for language selection, e.g. for the menu or subtitles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
- G06F40/58—Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computational Linguistics (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Signal Processing (AREA)
- Oral & Maxillofacial Surgery (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present invention provides a subtitle display method, comprising: acquiring an image of a current preset area, and determining from the image whether a user is present in the preset area; if a user is present in the preset area, obtaining physiological feature information of the user from the image, and determining the user's language from the physiological feature information; and setting subtitles in the same language as the user's language, and displaying the subtitles. The present invention also provides a subtitle display terminal and a computer-readable storage medium. The present invention can identify a user, determine the language the user speaks, and set the subtitle language according to the user's language, so that the user can understand the subtitle content.
Description
Technical field
The present invention relates to the field of multimedia technology, and in particular to a subtitle display method, a terminal, and a computer-readable storage medium.
Background art
With the continuous development of society, foreign visitors of different nationalities are often seen in public places. To provide these foreign visitors with guidance, televisions are often installed in public places to play guidance content. However, a television in a public place can usually display subtitles in only one fixed language; if a user cannot understand that language, the public television serves no purpose for that user, who cannot obtain the needed guidance information from it, degrading the user experience.
Summary of the invention
The main object of the present invention is to provide a subtitle display method, a terminal, and a computer-readable storage medium, aiming to solve the technical problem that the language of video subtitles cannot be adjusted automatically.
To achieve the above object, the present invention provides a subtitle display method comprising the following steps:
acquiring an image of a current preset area, and determining from the image whether a user is present in the preset area;
if a user is present in the preset area, obtaining physiological feature information of the user from the image, and determining the user's language from the physiological feature information;
setting subtitles in the same language as the user's language, and displaying the subtitles.
Optionally, the step of acquiring an image of the current preset area and determining from the image whether a user is present in the preset area comprises:
acquiring an image of the current preset area and performing face detection on the image to determine whether a face image is present in the image, thereby determining whether a user is present in the preset area.
Optionally, the step of obtaining physiological feature information of the user from the image if a user is present in the preset area, and determining the user's language from the physiological feature information, comprises:
if a user is present in the preset area, extracting a feature image of the user from the image, and obtaining the physiological feature information of the user from the feature image;
querying a preset database, and comparing the physiological feature information with the feature information in the preset database to determine the user's nationality, and determining the user's language from the user's nationality.
Optionally, the step of obtaining physiological feature information of the user from the image if a user is present in the preset area, and determining the user's language from the physiological feature information, further comprises:
if a user is present in the preset area, determining whether the number of users exceeds a preset display count;
if the number of users exceeds the preset display count, selecting valid users from among the users according to a preset rule, and obtaining the physiological feature information of the valid users;
determining the language of the valid users from the physiological feature information.
Optionally, after the step of determining, if a user is present in the preset area, whether the number of users exceeds the preset display count, the method further comprises:
if the number of users is less than or equal to the preset display count, obtaining the physiological feature information of all users, and determining the language of all users from the physiological feature information.
Optionally, after the step of setting subtitles in the same language as the user's language and displaying the subtitles, the method further comprises:
acquiring a second image of the preset area after a preset time has elapsed, and determining from the second image whether the user is still in the preset area;
if the user is still in the preset area, keeping the subtitle language unchanged.
Optionally, after the step of acquiring a second image of the preset area after the preset time has elapsed and determining from the second image whether the user is still in the preset area, the method further comprises:
if the user is no longer in the preset area, determining from the second image whether another, second user is present;
if a second user is present in the preset area, obtaining the physiological feature information of the second user from the second image, and determining the second user's language from that physiological feature information, so as to set subtitles in the same language as the second user's language.
Optionally, the physiological feature information includes hair-color feature information, skin-color feature information, and facial feature information.
In addition, to achieve the above object, the present invention also provides a subtitle display terminal comprising a processor, a memory, and a subtitle display program stored on the memory and executable by the processor, wherein the subtitle display program, when executed by the processor, implements the steps of the subtitle display method described above.
In addition, to achieve the above object, the present invention also provides a computer-readable storage medium on which a subtitle display program is stored, the subtitle display program implementing the steps of the subtitle display method described above when executed by a processor.
The present invention acquires an image of a current preset area and determines from the image whether a user is present in the preset area; if a user is present in the preset area, it obtains physiological feature information of the user from the image and determines the user's language from that information; it then sets subtitles in the same language as the user's language and displays the subtitles. In this way, the present invention can analyze a user in the preset area, infer the user's language habits from the user's physiological features, and configure subtitles accordingly, so that the displayed subtitles are in a language the user understands. Automatic adjustment of the subtitle language is thus achieved, making it easy for the user to read and understand the subtitles and improving the user experience.
Brief description of the drawings
Fig. 1 is a schematic diagram of the hardware architecture of a subtitle display terminal according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a first embodiment of the subtitle display method of the present invention;
Fig. 3 is a refined flowchart of the step in Fig. 2 of obtaining the user's physiological feature information from the image if a user is present in the preset area, and determining the user's language from the physiological feature information;
Fig. 4 is a schematic flowchart of a second embodiment of the subtitle display method of the present invention.
The realization of the object of the present invention, its functional characteristics, and its advantages will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the embodiments
It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
The main idea of the embodiments of the present invention is: acquiring an image of a current preset area, and determining from the image whether a user is present in the preset area; if a user is present in the preset area, obtaining physiological feature information of the user from the image, and determining the user's language from the physiological feature information; setting subtitles in the same language as the user's language, and displaying the subtitles.
The subtitle display method according to the embodiments of the present invention is mainly applied to a subtitle display terminal, which may be a terminal device with display and playback functions, such as a smart TV, a display screen, or a computer. In the following description, a smart TV is used as an example of the subtitle display terminal.
Referring to Fig. 1, Fig. 1 is a schematic diagram of the hardware architecture of a subtitle display terminal according to an embodiment of the present invention. As shown in Fig. 1, the subtitle display terminal may include a processor 1001 (e.g., a CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 implements the communication connections between these components; the user interface 1003 may include a display and an input unit such as a keyboard; the network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a Wi-Fi interface); the memory 1005 may be a high-speed RAM memory or a non-volatile memory such as a magnetic disk, and may optionally be a storage device independent of the aforementioned processor 1001.
Optionally, the subtitle display terminal may also include a camera, an RF (radio frequency) circuit, sensors, an audio circuit, a Wi-Fi module, and so on. The sensors may include optical sensors, motion sensors, and others. Specifically, the optical sensors may include an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display according to the ambient light, and the proximity sensor can adjust the brightness of the display according to the distance between the device and a reference object. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes) and, when stationary, the magnitude and direction of gravity; it can be used in applications that recognize the terminal's posture (such as landscape/portrait switching, related games, and magnetometer pose calibration) and in vibration-recognition functions (such as a pedometer or tap detection). Of course, the subtitle display terminal may also be equipped with other sensors such as a gyroscope, barometer, hygrometer, thermometer, and infrared sensor, which are not described further here.
Those skilled in the art will understand that the terminal structure shown in Fig. 1 does not limit the subtitle display terminal of the present invention, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
Continuing with Fig. 1, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, and a subtitle display program.
In the terminal shown in Fig. 1, the network communication module is mainly used to connect to a server and exchange data with it, and the processor 1001 can call the subtitle display program stored in the memory 1005 and perform the following operations:
acquiring an image of a current preset area, and determining from the image whether a user is present in the preset area;
if a user is present in the preset area, obtaining physiological feature information of the user from the image, and determining the user's language from the physiological feature information;
setting subtitles in the same language as the user's language, and displaying the subtitles.
Further, the processor 1001 can call the subtitle display program stored in the memory 1005 and perform the following operations:
acquiring an image of the current preset area and performing face detection on the image to determine whether a face image is present in the image, thereby determining whether a user is present in the preset area.
Further, the processor 1001 can call the subtitle display program stored in the memory 1005 and perform the following operations:
if a user is present in the preset area, extracting a feature image of the user from the image, and obtaining the physiological feature information of the user from the feature image;
querying a preset database, and comparing the physiological feature information with the feature information in the preset database to determine the user's nationality, and determining the user's language from the user's nationality.
Further, the processor 1001 can call the subtitle display program stored in the memory 1005 and perform the following operations:
if a user is present in the preset area, determining whether the number of users exceeds a preset display count;
if the number of users exceeds the preset display count, selecting valid users from among the users according to a preset rule, and obtaining the physiological feature information of the valid users;
determining the language of the valid users from the physiological feature information.
Further, the processor 1001 can call the subtitle display program stored in the memory 1005 and perform the following operations:
if the number of users is less than or equal to the preset display count, obtaining the physiological feature information of all users, and determining the language of all users from the physiological feature information.
Further, the processor 1001 can call the subtitle display program stored in the memory 1005 and perform the following operations:
acquiring a second image of the preset area after a preset time has elapsed, and determining from the second image whether the user is still in the preset area;
if the user is still in the preset area, keeping the subtitle language unchanged.
Further, the processor 1001 can call the subtitle display program stored in the memory 1005 and perform the following operations:
if the user is no longer in the preset area, determining from the second image whether another, second user is present;
if a second user is present in the preset area, obtaining the physiological feature information of the second user from the second image, and determining the second user's language from that physiological feature information, so as to set subtitles in the same language as the second user's language.
Further, the physiological feature information includes hair-color feature information, skin-color feature information, and facial feature information.
Based on the hardware structure of the subtitle display terminal described above, embodiments of the subtitle display method of the present invention are proposed.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of a first embodiment of the subtitle display method of the present invention.
In this embodiment, the subtitle display method comprises the following steps:
Step S10: acquiring an image of a current preset area, and determining from the image whether a user is present in the preset area;
Foreign visitors of different nationalities are often seen in public places. To provide these foreign visitors with guidance, televisions are often installed in public places to play guidance content. However, a television in a public place can usually display subtitles in only one fixed language, and if a user cannot understand that language, the public television serves no purpose: the user cannot obtain the needed guidance information from it, degrading the user experience.
In view of the above, this embodiment proposes a subtitle display method that can identify a user, determine the user's language, and set the subtitle language according to the user's language, so that the user can understand the subtitle content.
Specifically, a smart TV is used as an example in this embodiment. Smart TVs dedicated to playing guidance information are installed in public places (such as stations and shopping malls), so that people can obtain guidance information conveniently and quickly. These smart TVs can capture an image of a preset area through a camera to determine whether a user in the preset area is watching the TV. The camera may be integrated into the TV body, or it may be used with the smart TV as an external device; when used as an external device, the camera may be connected to the smart TV by wire or wirelessly. There may be one camera, or two or more. Through these cameras, images of the preset area can be captured. The preset area is the effective viewing region in front of the smart TV's display; the size and shape of this region can be set according to actual conditions. In this embodiment, when the camera finishes shooting the current preset area, it transmits the captured image to the smart TV; on receiving the image, the smart TV performs face recognition on it to determine whether a human face appears in the image, and thereby whether a user image is present; if a face appears in the image, the smart TV determines that a user image is present; if a user image is present, it determines that a user is in the preset area and considers that user to be currently watching the TV there. The method then proceeds to step S20.
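The presence check of step S10 can be sketched as follows. This is a minimal illustration with a pluggable face detector; in practice the detector could be, e.g., an OpenCV cascade classifier, but here a hypothetical stub stands in so the decision logic is self-contained:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Face:
    # Bounding box of a detected face, in image coordinates.
    x: int
    y: int
    w: int
    h: int

def user_present(image, detect_faces: Callable[[object], List[Face]]) -> bool:
    """Step S10: a user is considered present iff at least one face is detected."""
    return len(detect_faces(image)) > 0

# Stub detector for illustration only (a real one might be OpenCV's
# CascadeClassifier.detectMultiScale on a grayscale frame).
def stub_detector(image):
    return [Face(10, 10, 40, 40)] if image.get("has_person") else []

print(user_present({"has_person": True}, stub_detector))   # True
print(user_present({"has_person": False}, stub_detector))  # False
```

Only the boolean outcome matters for the method: a detected face triggers step S20, and an empty detection result skips all subsequent processing.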
Step S20: if a user is present in the preset area, obtaining physiological feature information of the user from the image, and determining the user's language from the physiological feature information;
In this embodiment, if the smart TV recognizes a face in the image of the preset area, it determines that a user in the preset area is watching the TV. The TV then infers the user's language habits from the image to determine which language (language kind) the user understands. Specifically, owing to environment and other factors, people living in the same region often share similar physiological features, and these people often communicate in the same language. For example, Europeans are often white, with eyes that may be blue, brown, and so on, while Chinese people usually have a yellow skin tone and dark brown eyes. Besides these, there are other physiological features (such as facial size and proportions). Following this principle, the physiological feature data of people from different regions can be analyzed to summarize the typical physiological features of each region, and these features can be entered into a database; when storing the physiological feature information of the people of a region, the common language (language kind) of that region can also be stored in association with it, for example associating North America with English, West Asia with Arabic, and China with Chinese. In this way, once a user's physiological feature information has been obtained, it can be compared with the feature data in the database to determine which region the user belongs to, determine the user's nationality, and determine the user's language from that nationality.
In this embodiment, when the smart TV determines that a user is in the preset area, it further analyzes and recognizes the image content, obtains the user image from the captured image, and obtains the user's physiological feature information from the user image, so as to determine the user's language from that information. The user's physiological feature information includes hair-color feature information, skin-color feature information, facial feature information (facial size, proportions, and the like), appearance feature information, and so on. On obtaining this information, the TV queries the database, compares the physiological feature information with the information in the database, and determines the user's nationality; once the nationality is determined, the user's language can be determined.
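The database comparison just described can be sketched as a nearest-match lookup. The patent does not specify a matching algorithm, so the feature encoding (a hypothetical three-component vector) and the Euclidean comparison below are illustrative assumptions only:

```python
import math

# Hypothetical per-region feature prototypes: (hair tone, skin tone, eye tone),
# each normalized to [0, 1]. The values are illustrative, not real data.
REGION_DB = {
    "North America": {"features": (0.8, 0.9, 0.5), "language": "English"},
    "West Asia":     {"features": (0.1, 0.6, 0.2), "language": "Arabic"},
    "China":         {"features": (0.1, 0.7, 0.15), "language": "Chinese"},
}

def match_language(features, db=REGION_DB):
    """Return the language associated with the closest region prototype."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    region = min(db, key=lambda r: dist(features, db[r]["features"]))
    return db[region]["language"]

print(match_language((0.75, 0.85, 0.5)))  # English
```

The essential point of step S20 survives any choice of matcher: features extracted from the user image are compared against stored region profiles, and the associated language becomes the subtitle target language.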
Step S30: setting subtitles in the same language as the user's language, and displaying the subtitles.
In this embodiment, once the smart TV has determined the user's language kind, the target language of the subtitles is determined, and subtitles can be set in that target language. Specifically, the smart TV takes the source subtitles to be displayed and translates them into the target language. When the translation is complete, the smart TV can display the subtitles in the target language. Of course, when displaying them, the untranslated source subtitles can also be shown. For example, suppose the source subtitles are Chinese and the user in the current preset area is American: having determined the user's language habits from the image, the smart TV translates the Chinese source subtitles to obtain English subtitles, and then displays the Chinese source subtitles together with the English subtitles.
Further, if, when determining from the image whether a user is present in the preset area, the smart TV does not recognize a face in the image, it concludes that no user is present in the preset area, and it does not perform the subsequent operations such as obtaining physiological feature information and translating subtitles.
Further, if, in determining the user's language from the user's physiological feature information, the smart TV fails to determine the user's language habits, it does not change the language of the source subtitles and displays the source subtitles directly. Of course, given how widely English is used, when the user's language cannot be determined, the source subtitles can also be translated into English and displayed together with the English subtitles.
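The display logic of step S30, including the fallback behaviour just described, can be sketched as follows. The `translate` hook is a hypothetical stand-in; the patent does not name a translation service:

```python
def build_subtitles(source_text, source_lang, target_lang, translate):
    """Return the subtitle lines to display (step S30).

    - If the user's language was determined, show the source line together
      with its translation into the target language.
    - If it could not be determined (target_lang is None), fall back to the
      untranslated source line.
    """
    if target_lang is None or target_lang == source_lang:
        return [source_text]
    return [source_text, translate(source_text, source_lang, target_lang)]

# Toy translation hook for illustration only.
def toy_translate(text, src, dst):
    return {("欢迎", "zh", "en"): "Welcome"}.get((text, src, dst), text)

print(build_subtitles("欢迎", "zh", "en", toy_translate))   # ['欢迎', 'Welcome']
print(build_subtitles("欢迎", "zh", None, toy_translate))   # ['欢迎']
```

Under the English-fallback variant mentioned above, the `None` branch would instead call `translate(source_text, source_lang, "en")` and return both lines.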
In this embodiment, an image of the current preset area is acquired, and whether a user is present in the preset area is determined from the image; if a user is present in the preset area, the user's physiological feature information is obtained from the image and the user's language is determined from it; subtitles are then set in the same language as the user's language and displayed. In this way, this embodiment can analyze the user in the preset area, infer the user's language habits from the user's physiological features, and configure the subtitles accordingly, so that the displayed subtitles are in a language the user understands. Automatic adjustment of the subtitle language is thus achieved, making it easy for the user to read and understand the subtitles and improving the user experience.
Referring to Fig. 3, Fig. 3 is a refined flowchart of the step in Fig. 2 of obtaining the user's physiological feature information from the image if a user is present in the preset area, and determining the user's language from the physiological feature information.
Based on the embodiment shown in Fig. 2 above, step S20 includes:
Step S21: if a user is present in the preset area, determining whether the number of users exceeds a preset display count;
Step S22: if the number of users exceeds the preset display count, selecting valid users from among the users according to a preset rule, and obtaining the physiological feature information of the valid users;
Step S23: determining the language of the valid users from the physiological feature information;
Step S24: if the number of users is less than or equal to the preset display count, obtaining the physiological feature information of all users, and determining the language of all users from the physiological feature information.
In this embodiment, if the smart TV determines from the image captured by the camera that users in the preset area are watching the TV, it analyzes the image to determine the number of users in the preset area. Specifically, face recognition can be used: the smart TV detects how many face images appear in the image and thereby determines the number of users. Having determined the number of users, it determines whether that number exceeds the preset display count. The preset display count is a preset value representing the maximum number of subtitle languages that can be displayed simultaneously. For example, a preset display count of 2 means the smart TV can display subtitles in at most two languages at once. The preset display count can be determined according to the screen size and the content being played, to ensure that displaying subtitles in multiple languages does not interfere with normal viewing of the other content.
In the present embodiment, after determining the number of users, the smart TV compares it with the preset display count to judge whether the number of users exceeds that count. If the number of users is greater than the preset display count, the smart TV cannot display subtitles in all the users' languages simultaneously; the smart TV therefore determines a preset number of valid users among all the users and acquires the physiological characteristic information of the valid users, so as to determine their languages from that information and perform the subsequent subtitle setting. The rule for determining valid users can be configured according to actual conditions. For example, it may be based on the time at which each user entered the preset area. Suppose the camera shoots in video mode, so that what it acquires is a video image; from this video image the smart TV determines that two users, a and b, share the preset area, where user a entered the preset area at exactly 10:00 and user b entered at 10:02. Since the preset display count is 1, i.e. the smart TV can display subtitles in at most one language, the smart TV must determine one valid user from users a and b and acquire that user's physiological characteristic information to determine his or her language. The smart TV then designates user a, who entered the preset area earlier, as the valid user, acquires user a's physiological characteristic information, and determines user a's language from it. Of course, valid users can also be determined by the users' positions. For example, if the preset area comprises a central area and a non-central area, then when the number of users exceeds the preset display count, the smart TV can designate the users in the central area as valid users and acquire their physiological characteristic information to determine their languages.
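The two example preset rules above (earlier entry time, central position) could be sketched as below. The function and the field names `entry_time` and `in_center` are illustrative assumptions for the sketch, not part of the patent.

```python
def pick_valid_users(users, preset_display_count, rule="entry_time"):
    """Select up to preset_display_count valid users by the chosen rule.

    Each user is a dict with an "entry_time" (seconds since some epoch)
    and an "in_center" flag, mirroring the two example rules.
    """
    if rule == "entry_time":
        # Earlier arrivals win, like user a (10:00) over user b (10:02).
        ranked = sorted(users, key=lambda u: u["entry_time"])
    elif rule == "position":
        # Users in the central area take priority over the rest
        # (False sorts before True, so negate the flag).
        ranked = sorted(users, key=lambda u: not u["in_center"])
    else:
        raise ValueError("unknown preset rule")
    return ranked[:preset_display_count]
```

Injecting the rule as a parameter mirrors the embodiment's point that the preset rule is configurable according to actual conditions.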
Further, if the number of users is less than or equal to the preset display count, the smart TV can acquire the physiological characteristic information of all users, determine each user's language from that information, set subtitles in the corresponding languages, and display them simultaneously.
Further, this embodiment first judges the relationship between the number of users and the preset number, and then determines from that comparison the valid users whose physiological characteristic information needs to be acquired. Alternatively, the smart TV may first acquire the physiological characteristic information of all users in the preset area and determine each user's language from it, and then judge whether the total number of these users' languages exceeds the preset number. If the total number of languages is greater than the preset number, effective languages are determined among them and the subtitles are set according to the effective languages; if the total number of languages is less than or equal to the preset number, subtitles in all of these languages are set directly and displayed simultaneously. The effective languages can be determined by the number of users speaking each language. For example, with a preset number of 2, if 5 people in the preset area speak English, 7 speak Chinese, and 1 speaks Japanese, then English and Chinese can be designated as the effective languages and the corresponding subtitles set. Of course, in a specific implementation, the rule for determining the effective languages can also be something else.
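The majority-based choice of effective languages described above can be sketched as follows, assuming each viewer's language has already been determined in the earlier step; the function name is hypothetical.

```python
from collections import Counter

def effective_languages(user_languages, preset_count):
    """Keep the preset_count most widely spoken languages among viewers.

    user_languages is one entry per viewer, e.g. 5 "English" entries
    for 5 English speakers; Counter.most_common ranks by frequency.
    """
    counts = Counter(user_languages)
    return [lang for lang, _ in counts.most_common(preset_count)]
```

Run on the embodiment's example (5 English, 7 Chinese, 1 Japanese speakers with a preset number of 2), this keeps Chinese and English and drops Japanese.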
Referring to Fig. 4, Fig. 4 is a schematic flowchart of a second embodiment of the subtitle display method of the present invention.
Based on the embodiment shown in Fig. 2 above, after step S30 the method further includes:
Step S40, acquiring a secondary image of the preset area after a preset time has elapsed, and judging from the secondary image whether the user is still in the preset area;
Step S50, if the user is still in the preset area, keeping the language of the subtitles unchanged;
Step S60, if the user is not in the preset area, judging from the secondary image whether other, secondary users are present in the preset area;
Step S70, if secondary users are present in the preset area, acquiring the physiological characteristic information of the secondary users from the secondary image, and determining the secondary users' languages according to their physiological characteristic information, so as to set subtitles in the same languages as those used by the secondary users.
In the present embodiment, after displaying subtitles in a given language, the smart TV can also continue to monitor the preset area through the camera and adjust the subtitle language in real time as the TV-watching users in the preset area change. Specifically, the smart TV again acquires an image of the preset area through the camera once the preset time has elapsed; to distinguish it from the image in step S10, this image may be called the secondary image. From the secondary image, the TV judges whether the original user is still in the preset area watching TV. If the original user is still in the preset area, the TV considers that the original user is still watching and, to ensure that the original user can continue to understand the subtitle content, does not adjust the subtitle language but keeps the current subtitle state. If the original user is not in the preset area (i.e. has left it), the TV considers that the original user is no longer reading the subtitles; the TV then detects whether other users are in the preset area, and these other users may be called secondary users. If a secondary user is present, the TV acquires the secondary user's physiological characteristic information, determines the secondary user's language according to that information, and sets subtitles in the same language as that used by the secondary user, ensuring that the secondary user can understand the subtitle content. The specific details of feature acquisition and language determination can refer to steps S10 and S20 and are not repeated here. As for the camera's shooting, it can be continuous, i.e. the camera remains in a shooting state and acquires a real-time surveillance video of the preset area; of course, it can also capture photographs at a certain period. Likewise, the TV's face recognition and user detection can be performed in real time on the surveillance video, or once per period.
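The re-check of steps S40 to S70 might be sketched like this; the detector and language chooser are injected as callables purely for illustration, standing in for camera capture plus face recognition and the language-determination step.

```python
def recheck_subtitles(original_user, detect_users, current_language,
                      choose_language):
    """Return the subtitle language after one preset-time interval.

    detect_users() yields the users seen in the secondary image;
    choose_language(user) determines a secondary user's language.
    """
    present = detect_users()           # users in the secondary image (S40)
    if original_user in present:
        return current_language        # S50: keep the subtitles unchanged
    if present:                        # S60/S70: a secondary user is there
        return choose_language(present[0])
    return current_language            # area is empty; leave subtitles as-is
```

The empty-area branch is an assumption of this sketch: the embodiment does not specify what happens when nobody remains in the preset area.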
In the present embodiment, after displaying subtitles in a given language, the smart TV can continue to monitor the preset area through the camera and adjust the subtitle language in real time as the TV-watching users in the preset area change, thereby meeting the subtitle-reading needs of the constantly flowing crowds in public places.
The present invention also provides a computer-readable storage medium.
A subtitle display program is stored on the computer-readable storage medium of the present invention; when the subtitle display program is executed by a processor, the steps of the subtitle display method described above are implemented.
The method implemented when the subtitle display program is executed can refer to the embodiments of the subtitle display method of the present invention and is not repeated here.
It should be noted that, herein, the terms "comprise" and "include", or any other variant thereof, are intended to cover non-exclusive inclusion, so that a process, method, article, or system that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or system. In the absence of further limitations, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or system that includes that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, or the part of it that contributes to the prior art, can be embodied in the form of a software product, and the computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) as described above and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the invention. Any equivalent structural or flow transformation made using the contents of the specification and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the scope of patent protection of the present invention.
Claims (10)
1. A subtitle display method, characterized in that the subtitle display method comprises the following steps:
acquiring an image of a current preset area, and judging from the image whether a user is present in the preset area;
if a user is present in the preset area, acquiring physiological characteristic information of the user from the image, and determining the user's language according to the physiological characteristic information;
setting subtitles in the same language as that used by the user, and displaying the subtitles.
2. The subtitle display method of claim 1, characterized in that the step of acquiring an image of the current preset area and judging from the image whether a user is present in the preset area comprises:
acquiring an image of the current preset area, performing face detection on the image, and judging whether a facial image is present in the image, so as to judge whether a user is present in the preset area.
3. The subtitle display method of claim 1, characterized in that the step of, if a user is present in the preset area, acquiring the physiological characteristic information of the user from the image and determining the user's language according to the physiological characteristic information comprises:
if a user is present in the preset area, obtaining a characteristic image of the user from the image, and acquiring the physiological characteristic information of the user from the characteristic image;
querying a preset database, and comparing the physiological characteristic information with characteristic information in the preset database, so as to judge the nationality of the user and determine the user's language according to the user's nationality.
4. The subtitle display method of claim 1, characterized in that the step of, if a user is present in the preset area, acquiring the physiological characteristic information of the user from the image and determining the user's language according to the physiological characteristic information further comprises:
if a user is present in the preset area, judging whether the number of users is greater than a preset display count;
if the number of users is greater than the preset display count, determining valid users among the users according to a preset rule, and acquiring the physiological characteristic information of the valid users;
determining the languages used by the valid users according to the physiological characteristic information.
5. The subtitle display method of claim 4, characterized in that after the step of, if a user is present in the preset area, judging whether the number of users is greater than the preset display count, the method further comprises:
if the number of users is less than or equal to the preset display count, acquiring the physiological characteristic information of all users, and determining the languages used by all users according to the physiological characteristic information.
6. The subtitle display method of claim 1, characterized in that after the step of setting subtitles in the same language as that used by the user and displaying the subtitles, the method further comprises:
acquiring a secondary image of the preset area after a preset time has elapsed, and judging from the secondary image whether the user is still in the preset area;
if the user is still in the preset area, keeping the language of the subtitles unchanged.
7. The subtitle display method of claim 6, characterized in that after the step of acquiring a secondary image of the preset area after a preset time has elapsed and judging from the secondary image whether the user is still in the preset area, the method further comprises:
if the user is not in the preset area, judging from the secondary image whether other, secondary users are present in the preset area;
if secondary users are present in the preset area, acquiring the physiological characteristic information of the secondary users from the secondary image, and determining the secondary users' languages according to their physiological characteristic information, so as to set subtitles in the same languages as those used by the secondary users.
8. The subtitle display method of any one of claims 1 to 7, characterized in that the physiological characteristic information includes hair color characteristic information, skin color characteristic information, and facial characteristic information.
9. A subtitle display terminal, characterized in that the subtitle display terminal comprises a processor, a memory, and a subtitle display program stored on the memory and executable by the processor, wherein when the subtitle display program is executed by the processor, the steps of the subtitle display method of any one of claims 1 to 8 are implemented.
10. A computer-readable storage medium, characterized in that a subtitle display program is stored on the computer-readable storage medium, and when the subtitle display program is executed by a processor, the steps of the subtitle display method of any one of claims 1 to 8 are implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710588083.4A CN107484034A (en) | 2017-07-18 | 2017-07-18 | Caption presentation method, terminal and computer-readable recording medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107484034A true CN107484034A (en) | 2017-12-15 |
Family
ID=60596285
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710588083.4A Pending CN107484034A (en) | 2017-07-18 | 2017-07-18 | Caption presentation method, terminal and computer-readable recording medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107484034A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103916711A (en) * | 2014-03-31 | 2014-07-09 | 小米科技有限责任公司 | Method and device for playing video signals |
CN104412606A (en) * | 2012-06-29 | 2015-03-11 | 卡西欧计算机株式会社 | Content playback control device, content playback control method and program |
CN104602131A (en) * | 2015-02-16 | 2015-05-06 | 腾讯科技(北京)有限公司 | Barrage processing method and system |
CN104902333A (en) * | 2014-09-19 | 2015-09-09 | 腾讯科技(深圳)有限公司 | Video comment processing method and video comment processing device |
CN105049950A (en) * | 2014-04-16 | 2015-11-11 | 索尼公司 | Method and system for displaying information |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108694394A (en) * | 2018-07-02 | 2018-10-23 | 北京分音塔科技有限公司 | Translator, method, apparatus and the storage medium of recognition of face |
CN109309777A (en) * | 2018-08-15 | 2019-02-05 | 罗勇 | It repeats scene image and is grouped platform |
CN109600680A (en) * | 2018-08-15 | 2019-04-09 | 罗勇 | Repeat scene image group technology |
CN109309777B (en) * | 2018-08-15 | 2019-05-10 | 上海极链网络科技有限公司 | It repeats scene image and is grouped platform |
CN109600680B (en) * | 2018-08-15 | 2019-06-28 | 上海极链网络科技有限公司 | Repeat scene image group technology |
CN110033428A (en) * | 2018-08-23 | 2019-07-19 | 永康市胜时电机有限公司 | Captions adding system based on ethnic group detection |
CN111208968A (en) * | 2018-11-21 | 2020-05-29 | 余姚市展翰电器有限公司 | Toilet bowl service language switching platform |
CN109905756B (en) * | 2019-01-17 | 2021-11-12 | 平安科技(深圳)有限公司 | Television caption dynamic generation method based on artificial intelligence and related equipment |
WO2020147394A1 (en) * | 2019-01-17 | 2020-07-23 | 平安科技(深圳)有限公司 | Method employing artificial intelligence for dynamic generation of television closed captions, and related apparatus |
CN109905756A (en) * | 2019-01-17 | 2019-06-18 | 平安科技(深圳)有限公司 | TV subtitling dynamic creation method and relevant device based on artificial intelligence |
CN109977866A (en) * | 2019-03-25 | 2019-07-05 | 联想(北京)有限公司 | Content translation method and device, computer system and computer readable storage medium |
CN109977866B (en) * | 2019-03-25 | 2021-04-13 | 联想(北京)有限公司 | Content translation method and device, computer system and computer readable storage medium |
WO2020261078A1 (en) * | 2019-06-25 | 2020-12-30 | International Business Machines Corporation | Cognitive modification of verbal communications from an interactive computing device |
US11315544B2 (en) | 2019-06-25 | 2022-04-26 | International Business Machines Corporation | Cognitive modification of verbal communications from an interactive computing device |
CN110519620A (en) * | 2019-08-30 | 2019-11-29 | 三星电子(中国)研发中心 | Recommend the method and television set of TV programme in television set |
CN110718144A (en) * | 2019-11-07 | 2020-01-21 | 毛春根 | Advertising board |
CN110853498A (en) * | 2019-11-07 | 2020-02-28 | 毛春根 | Scenic spot indicating device |
CN114503546A (en) * | 2019-11-11 | 2022-05-13 | 深圳市欢太科技有限公司 | Subtitle display method, device, electronic equipment and storage medium |
CN117593949A (en) * | 2024-01-19 | 2024-02-23 | 成都金都超星天文设备有限公司 | Control method, equipment and medium for astronomical phenomena demonstration of astronomical phenomena operation |
CN117593949B (en) * | 2024-01-19 | 2024-03-29 | 成都金都超星天文设备有限公司 | Control method, equipment and medium for astronomical phenomena demonstration of astronomical phenomena operation |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107484034A (en) | Caption presentation method, terminal and computer-readable recording medium | |
US10366691B2 (en) | System and method for voice command context | |
US20180285544A1 (en) | Method for adaptive authentication and electronic device supporting the same | |
US9326675B2 (en) | Virtual vision correction for video display | |
KR102333101B1 (en) | Electronic device for providing property information of external light source for interest object | |
US9965860B2 (en) | Method and device for calibration-free gaze estimation | |
CN108712603B (en) | Image processing method and mobile terminal | |
US20190244369A1 (en) | Display device and method for image processing | |
CN108377422B (en) | Multimedia content playing control method, device and storage medium | |
US20180181811A1 (en) | Method and apparatus for providing information regarding virtual reality image | |
JP2010067104A (en) | Digital photo-frame, information processing system, control method, program, and information storage medium | |
KR20200092465A (en) | Method for recommending contents and electronic device therefor | |
KR102037419B1 (en) | Image display apparatus and operating method thereof | |
CN108712674A (en) | Video playing control method, playback equipment and storage medium | |
US20200166996A1 (en) | Display apparatus and controlling method thereof | |
CN117032612B (en) | Interactive teaching method, device, terminal and medium based on high beam imaging learning machine | |
US20210117048A1 (en) | Adaptive assistive technology techniques for computing devices | |
US11915671B2 (en) | Eye gaze control of magnification user interface | |
CN105872845A (en) | Method for intelligently regulating size of subtitles and television | |
US11128909B2 (en) | Image processing method and device therefor | |
KR20170033549A (en) | Display device, method for controlling the same and computer-readable recording medium | |
KR20220127568A (en) | Method for providing home tranninig service and a display apparatus performing the same | |
US20180366089A1 (en) | Head mounted display cooperative display system, system including dispay apparatus and head mounted display, and display apparatus thereof | |
CN117636767A (en) | Image display method, system, terminal and storage medium of high beam imaging learning machine | |
CN112511890A (en) | Video image processing method and device and electronic equipment |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20171215 |