CN106203332A - Method and system for recognizing human facial expression changes based on intelligent robot vision - Google Patents
Method and system for recognizing human facial expression changes based on intelligent robot vision
- Publication number
- CN106203332A CN106203332A CN201610537523.9A CN201610537523A CN106203332A CN 106203332 A CN106203332 A CN 106203332A CN 201610537523 A CN201610537523 A CN 201610537523A CN 106203332 A CN106203332 A CN 106203332A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Abstract
The invention discloses a method and system for recognizing human facial expression changes based on intelligent robot vision. The intelligent robot is provided with a robot operating system that performs the following: receiving, from a first application, an instruction to acquire a facial expression change node; cyclically calling a vision module according to the instruction to obtain image information and recording the facial expression information in the image information; and judging, from the recorded facial expression information, whether a facial expression has changed, and feeding node information back when a node at which the expression changes is identified. The method can recognize changes in the user's emotion in time, thereby better realizing automatic snapshot capture and improving the user's interactive experience.
Description
Technical field
The invention belongs to the field of intelligent robotics, and in particular relates to a method and system for recognizing human facial expression changes based on intelligent robot vision.
Background art
With the wide application of intelligent robots in multimodal interaction, new requirements are placed on their performance. For example, an intelligent robot with a camera function is required not only to take photos, but also to capture snapshots automatically. The timing node of an automatic snapshot is critical to the quality of the captured picture, but because that timing node is usually obtained dynamically, it is difficult to control when to capture. A method that can precisely control the automatic-snapshot timing node is therefore urgently needed.
Summary of the invention
One of the technical problems to be solved is to provide a method that can precisely control the automatic-snapshot timing node.
To solve the above technical problem, an embodiment of this application first provides a method for recognizing human facial expression changes based on intelligent robot vision. The intelligent robot is provided with a robot operating system that performs the following: receiving, from a first application, an instruction to acquire a facial expression change node; cyclically calling a vision module according to the instruction to obtain image information and recording the facial expression information in the image information; and judging, from the recorded facial expression information, whether a facial expression has changed, and feeding node information back when a node at which the expression changes is identified.
Preferably, the image information is acquired as follows: each acquired frame is identified separately, and the number of faces contained in each frame and the emotion value corresponding to each face are determined and recorded.
Preferably, determining and recording the number of faces contained in each frame and the emotion value corresponding to each face includes: when the number of faces contained in the image is greater than or equal to one, obtaining the expression information of each face in turn; and comparing each face's expression with preset facial expressions, taking the numerical value corresponding to the preset expression most similar to the face as that face's emotion value.
Preferably, judging from the recorded facial expression information whether a facial expression has changed includes: comparing each face's emotion value in two adjacent frames in turn; and judging that a facial expression has changed when any emotion value changes.
Preferably, the preset facial expression most similar to a face is determined by image similarity comparison.
An embodiment of this application also provides a system for recognizing human facial expression changes based on intelligent robot vision. The intelligent robot is provided with a robot operating system, and the system for recognizing facial expression changes includes: an instruction receiving module, which receives, from a first application, an instruction to acquire a facial expression change node; a facial expression information recognition module, which cyclically calls a vision module according to the instruction to obtain image information and records the facial expression information in the image information; and a judgment and feedback module, which judges, from the recorded facial expression information, whether a facial expression has changed and feeds node information back when a node at which the expression changes is identified.
Preferably, the facial expression information recognition module acquires image information as follows: each acquired frame is identified separately, and the number of faces contained in each frame and the emotion value corresponding to each face are determined and recorded.
Preferably, the facial expression information recognition module determines and records the number of faces contained in each frame and the emotion value corresponding to each face as follows: when the number of faces contained in the image is greater than or equal to one, the expression information of each face is obtained in turn; each face's expression is compared with preset facial expressions, and the numerical value corresponding to the preset expression most similar to the face is taken as that face's emotion value.
Preferably, the judgment and feedback module judges whether a facial expression has changed as follows: each face's emotion value in two adjacent frames is compared in turn; when any emotion value changes, it is judged that a facial expression has changed.
Preferably, the judgment and feedback module determines the preset facial expression most similar to a face by image similarity comparison.
Compared with the prior art, one or more of the above embodiments can have the following advantages or beneficial effects: image information is continuously obtained by calling the vision module, and the node at which a face's emotion changes is obtained by analyzing and comparing the emotion values of the facial expressions across images, providing correct timing-node information for system applications. The method can recognize changes in the user's emotion in time, thereby better realizing automatic snapshot capture and improving the user's interactive experience.
Other advantages, objects and features of the invention will be set forth to some extent in the following description, will be apparent to those skilled in the art upon examination of what follows, or may be learned from practice of the invention. The objects and other advantages of the invention can be realized and obtained by the structure particularly pointed out in the description, the claims and the accompanying drawings.
Brief description of the drawings
The accompanying drawings provide a further understanding of the technical solutions of this application or the prior art and form part of the description. The drawings, which illustrate embodiments of the application, serve together with the embodiments to explain the technical solutions of the application, but do not limit them.
Fig. 1 is a schematic flowchart of a method for recognizing human facial expression changes based on intelligent robot vision according to a first embodiment of the invention;
Fig. 2 is a schematic flowchart of a method for recognizing human facial expression changes based on intelligent robot vision according to a second embodiment of the invention;
Fig. 3 is a schematic structural diagram of a system for recognizing human facial expression changes based on intelligent robot vision according to a third embodiment of the invention.
Detailed description of the invention
Embodiments of the invention are described in detail below with reference to the drawings and examples, so that how the invention applies technical means to solve technical problems and achieve the relevant technical effects can be fully understood and implemented. The features of the embodiments of this application can be combined with each other as long as they do not conflict, and the resulting technical solutions all fall within the protection scope of the invention.
Embodiments of the invention provide a method that uses the visual capability of an intelligent robot system to obtain the time point of an automatic snapshot.
First embodiment:
Fig. 1 is a schematic flowchart of the method for recognizing human facial expression changes based on intelligent robot vision according to the first embodiment of the invention. As shown, the method comprises the following steps:
Step S110: receive, from a first application, an instruction to acquire a facial expression change node.
Step S120: cyclically call a vision module according to the above instruction to obtain image information, and record the facial expression information in the image information.
Step S130: judge, from the recorded facial expression information, whether a facial expression has changed, and feed node information back when a node at which the expression changes is identified.
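The three steps above can be sketched as a simple loop. This is a minimal illustration only; the function and callback names (`get_frame`, `extract_expressions`, `notify`) are assumptions, not from the patent.

```python
def watch_for_expression_change(get_frame, extract_expressions, notify):
    """S110-S130 as a loop: repeatedly call the vision module, record the
    facial expression information of each frame, and feed a node back to
    the requesting application when adjacent frames differ."""
    previous = None
    while True:
        frame = get_frame()                    # S120: call the vision module
        if frame is None:                      # preview stream ended
            break
        current = extract_expressions(frame)   # record expression information
        if previous is not None and current != previous:
            notify(current)                    # S130: feed back node information
        previous = current
```

Here `get_frame` stands in for the camera preview and `extract_expressions` for the per-frame face analysis described below.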
Specifically, a snapshot APP usually captures after a preset change occurs in the received image, for example when the expression of a face in the image changes. The key to a correct snapshot is therefore to know accurately the time point at which the image changes.
The intelligent robot is provided with an intelligent robot system that manages the coordination of the robot's software and hardware, including calling the robot's various sensing units such as the vision module (camera). The intelligent robot system can therefore call the camera to help the snapshot APP obtain changes in the image information.
In step S110, when the snapshot APP starts its snapshot function, it first sends the intelligent robot system an instruction requesting acquisition of a facial expression change node. According to this instruction, the intelligent robot system calls the camera and continuously obtains the preview image information from it. Since the camera continuously frames the environment within a certain range, the intelligent robot system can continuously obtain preview images.
Next, in step S120, the face information in the image information is obtained by analysis. Specifically, each acquired frame is analyzed separately. A received frame may contain no face, or may contain more than one face. When analyzing an image that contains faces, each face in the frame is analyzed in turn, and the analysis result is recorded for subsequent comparison.
In step S130, whether a facial expression has changed is judged from the recorded facial expression information. The recorded information covers multiple frames. The faces in two adjacent frames are compared; when a face's expression differs between the two frames, it can be determined that the expression has changed, and that moment is further regarded as the node of the facial expression change. After the node is determined, the intelligent robot system feeds the node information back to the application that sent the acquisition request.
As noted above, the same face may be present in both of two adjacent frames; if its expression changes, the moment of the change can be determined as the timing node. It is also possible that a face present in the previous frame no longer exists in the next frame (because the person moved out of the camera's range), or conversely that a face absent from the previous frame appears in the next frame. When an image contains no face, the intelligent robot system records no corresponding face information, so when the two adjacent frames are compared, one side of the comparison is empty. By convention, this case can also be judged as a facial expression change, and the intelligent robot system feeds back node information.
After receiving the node information fed back by the intelligent robot system, the snapshot APP can execute its snapshot function according to that node information.
Because the intelligent robot system continuously refreshes the facial expression information and continuously compares the information in the images to obtain timing nodes, if node information is obtained between every pair of adjacent frames over multiple consecutive frames, the system can return node information to the application continuously, and the snapshot APP can realize burst shooting according to the continuously fed-back nodes.
The method of this embodiment provides a way to pick out more accurately the timing node at which a face's expression changes, thereby better realizing automatic snapshots; it can recognize changes in the user's emotion in time and optimizes application and system functions.
Second embodiment:
Fig. 2 is a schematic flowchart of the method for recognizing human facial expression changes based on intelligent robot vision according to the second embodiment of the invention. In this embodiment, images are analyzed by obtaining the emotion value corresponding to each face in the image, as described in detail below.
As shown in Fig. 2, the snapshot APP sends the intelligent robot system an instruction requesting acquisition of a facial expression change node, and the intelligent robot system calls the camera to obtain preview image information according to the request. This step is the same as step S110 in the first embodiment and is not repeated here.
The intelligent robot system identifies each acquired frame separately, determining and recording the number of faces contained in each frame and the emotion value corresponding to each face.
In this embodiment, the emotion value is the result of quantizing the emotion conveyed by the face information. For example, the emotion value of a smile may be set to 0, that of shedding tears to 1, that of anger to 2, and so on. Recording face information as emotion values both clearly represents the different emotional characteristics expressed by a face and simplifies information storage.
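A minimal sketch of this quantization, using the example values from the text (smile 0, shedding tears 1, anger 2); the dictionary and function names are illustrative assumptions.

```python
# Example emotion values from the text: smile -> 0, shedding tears -> 1, anger -> 2.
EMOTION_VALUES = {"smile": 0, "tears": 1, "anger": 2}

def quantize_frame(faces):
    """Replace each face's expression label with its compact emotion value,
    so only small numbers need to be stored per frame."""
    return {face_id: EMOTION_VALUES[label] for face_id, label in faces.items()}
```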
Further, when determining and recording the number of faces in each frame and the emotion value corresponding to each face, the number of faces in each frame is first counted. When the number is greater than or equal to one, the expression information of each face is obtained in turn, and the emotion value of each face is then determined by comparing its expression with the preset facial expressions.
The preset facial expressions are representative facial expressions prestored by the intelligent robot system, each corresponding to one of the emotion values described above. The emotion value corresponding to the preset expression most similar to the obtained facial expression is taken as the emotion value of the compared face.
For example, if the obtained facial expression is most similar to the preset expression representing a smile, the emotion value corresponding to that preset smiling expression is determined as the emotion value of the compared face. In this way, each person's facial expression information in every frame can be converted into a definite numerical value for storage.
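A sketch of this nearest-preset lookup, assuming faces and presets are represented as numeric feature vectors and using negative squared Euclidean distance as a stand-in similarity measure (the patent leaves the measure open; all names here are illustrative).

```python
def emotion_value(face, presets):
    """Return the emotion value whose preset expression descriptor is most
    similar to the observed face descriptor. `presets` maps an emotion
    value to a reference descriptor."""
    def similarity(a, b):
        # Higher is more similar; any similarity measure could be swapped in.
        return -sum((x - y) ** 2 for x, y in zip(a, b))
    return max(presets, key=lambda value: similarity(face, presets[value]))
```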
Next, each face's emotion value in two adjacent frames is compared in turn; when any emotion value changes, it is judged that the facial expression has changed.
For two adjacent frames, the number of faces is counted first. When an image contains only one face, that face's emotion value in the two adjacent frames is compared; if the values are unequal, the face's expression has changed, and the moment of the change can be determined as the timing node.
Further, when an image contains multiple faces, the faces are numbered while being counted; the emotion value of each face is then determined separately, and the emotion values of identically numbered faces in the two adjacent frames are compared. If the comparison result for any number is unequal, that face's expression has changed, and the moment of the change can be determined as the timing node.
In practice it is also possible that a face present in the previous frame no longer exists in the next frame (because the person moved out of the camera's range). In that case the face has a corresponding emotion value in the previous frame but an empty emotion value in the next frame, so the comparison result is that the facial expression has changed, and the intelligent robot system feeds back node information. The opposite can also occur: a face absent from the previous frame appears in the next frame. That face has no recorded emotion value in the previous frame but has one in the next frame; by convention, the first appearance of a face with a determined emotion value is judged as a facial expression change, and the intelligent robot system feeds back node information.
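The three comparison cases above (a changed value, a face leaving the view, a face entering the view) can be sketched in one function, keying faces by their number and treating a missing entry as an empty emotion value; the function name and data layout are assumptions for illustration.

```python
def is_change_node(prev_values, curr_values):
    """Compare emotion values of identically numbered faces in two adjacent
    frames. A differing value, a value that became empty (face left the
    view), or a newly appearing value all count as an expression-change node."""
    for face_id in set(prev_values) | set(curr_values):
        # dict.get returns None for a missing face, covering both the
        # "left the view" and "entered the view" cases.
        if prev_values.get(face_id) != curr_values.get(face_id):
            return True
    return False
```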
After determining the node at which the facial expression changes, the intelligent robot system feeds the node information back to the application that sent the acquisition request (the snapshot APP). After receiving the node information fed back by the intelligent robot system, the snapshot APP can execute its snapshot function accordingly.
The method of this embodiment uses emotion values corresponding to facial expressions to obtain and record the timing node of an expression change, which simplifies the comparison process while reducing the amount of data stored.
It should be noted that in the first and second embodiments, when images are compared, the preset facial expression most similar to a face can be judged by an image similarity comparison method. Any similarity measure commonly used in the prior art may be employed; the embodiments of the invention do not limit the image similarity comparison method used.
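As one example of such an off-the-shelf measure (the patent does not mandate any particular one), cosine similarity between flattened image or feature vectors:

```python
def cosine_similarity(a, b):
    """Cosine similarity between two equal-length numeric vectors:
    1.0 means identical direction, 0.0 means orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)
```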
Third embodiment:
Fig. 3 is a schematic structural diagram of the system for recognizing human facial expression changes based on intelligent robot vision. The intelligent robot is provided with a robot operating system, and the system for recognizing facial expression changes includes:
An instruction receiving module 31, which receives, from a first application, an instruction to acquire a facial expression change node.
A facial expression information recognition module 32, which cyclically calls a vision module according to the instruction to obtain image information and records the facial expression information in the image information.
A judgment and feedback module 33, which judges, from the recorded facial expression information, whether a facial expression has changed, and feeds node information back when a node at which the expression changes is identified.
The facial expression information recognition module 32 identifies each acquired frame separately, determining and recording the number of faces contained in each frame and the emotion value corresponding to each face.
When the number of faces contained in the image is greater than or equal to one, the facial expression information recognition module 32 obtains the expression information of each face in turn, compares each face's expression with the preset facial expressions, and takes the numerical value corresponding to the preset expression most similar to the compared face's expression as that face's emotion value.
The judgment and feedback module 33 compares each face's emotion value in two adjacent frames in turn; when any emotion value changes, it judges that the facial expression has changed.
The system of this embodiment can respond to the snapshot APP's request to the intelligent robot system for the timing node of a facial emotion change, feeding node information back to the snapshot APP in time and helping it realize snapshot and burst-shooting functions.
Those skilled in the art should understand that the modules or steps of the invention can be implemented by a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network formed by multiple computing devices. Alternatively, they can be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; or they can be fabricated as individual integrated circuit modules, or multiple modules or steps among them can be fabricated as a single integrated circuit module. The invention is thus not restricted to any specific combination of hardware and software.
Although the embodiments are disclosed as above, the content described is only an implementation adopted to facilitate understanding of the invention and does not limit it. Any person skilled in the technical field of the invention may make modifications and changes in form and detail without departing from the spirit and scope disclosed by the invention, but the scope of patent protection of the invention is still defined by the appended claims.
Claims (10)
1. A method for recognizing human facial expression changes based on intelligent robot vision, wherein the intelligent robot is provided with a robot operating system, and the robot operating system performs:
receiving, from a first application, an instruction to acquire a facial expression change node;
cyclically calling a vision module according to the instruction to obtain image information, and recording the facial expression information in the image information; and
judging, according to the recorded facial expression information, whether a facial expression has changed, and feeding back node information when a node at which the facial expression changes is identified.
2. The method according to claim 1, wherein the image information is acquired as follows:
each acquired frame of image is identified separately, and the number of faces contained in each frame and the emotion value corresponding to each face are determined and recorded.
3. The method according to claim 2, wherein determining and recording the number of faces contained in each frame and the emotion value corresponding to each face comprises:
when the number of faces contained in the image is greater than or equal to one, obtaining the expression information of each face in turn; and
comparing each face's expression with preset facial expressions, and taking the numerical value corresponding to the preset facial expression most similar to the face as that face's emotion value.
4. The method according to claim 3, wherein judging according to the recorded facial expression information whether a facial expression has changed comprises:
comparing each face's emotion value in two adjacent frames in turn; and
judging that the facial expression has changed when any emotion value changes.
5. The method according to claim 3 or 4, wherein the preset facial expression most similar to the face is determined by image similarity comparison.
6. A system for recognizing human facial expression changes based on intelligent robot vision, wherein the intelligent robot is provided with a robot operating system, the system comprising:
an instruction receiving module, which receives, from a first application, an instruction to acquire a facial expression change node;
a facial expression information recognition module, which cyclically calls a vision module according to the instruction to obtain image information and records the facial expression information in the image information; and
a judgment and feedback module, which judges, according to the recorded facial expression information, whether a facial expression has changed, and feeds back node information when a node at which the facial expression changes is identified.
7. The system according to claim 6, wherein the facial expression information recognition module acquires image information as follows:
each acquired frame of image is identified separately, and the number of faces contained in each frame and the emotion value corresponding to each face are determined and recorded.
8. The system according to claim 7, wherein the facial expression information recognition module determines and records the number of faces contained in each frame and the emotion value corresponding to each face as follows:
when the number of faces contained in the image is greater than or equal to one, the expression information of each face is obtained in turn; and
each face's expression is compared with preset facial expressions, and the numerical value corresponding to the preset facial expression most similar to the face is taken as that face's emotion value.
9. The system according to claim 8, wherein the judgment and feedback module judges whether a facial expression has changed as follows:
each face's emotion value in two adjacent frames is compared in turn; and
when any emotion value changes, it is judged that the facial expression has changed.
10. The system according to claim 8 or 9, wherein the judgment and feedback module determines the preset facial expression most similar to the face by image similarity comparison.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610537523.9A CN106203332A (en) | 2016-07-08 | 2016-07-08 | Method and system based on the change of intelligent robot visual identity face facial expression |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106203332A | 2016-12-07 |
Family
ID=57472738
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610537523.9A Pending CN106203332A (en) | 2016-07-08 | 2016-07-08 | Method and system based on the change of intelligent robot visual identity face facial expression |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106203332A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102100062A (en) * | 2008-07-17 | 2011-06-15 | 日本电气株式会社 | Imaging device, imaging method and program |
CN103971090A (en) * | 2007-09-28 | 2014-08-06 | 富士胶片株式会社 | Image processing apparatus, image capturing apparatus, image processing method and recording medium |
CN104519263A (en) * | 2013-09-27 | 2015-04-15 | 联想(北京)有限公司 | Method for acquiring image and electronic device |
CN105391922A (en) * | 2014-09-02 | 2016-03-09 | 宏达国际电子股份有限公司 | Electronic device and image capture method thereof |
CN105578058A (en) * | 2016-02-03 | 2016-05-11 | 北京光年无限科技有限公司 | Shooting control method and device for intelligent robot and robot |
Application Events
2016-07-08: CN patent application CN201610537523.9A filed; published as CN106203332A; legal status: Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103971090A (en) * | 2007-09-28 | 2014-08-06 | 富士胶片株式会社 | Image processing apparatus, image capturing apparatus, image processing method and recording medium |
CN102100062A (en) * | 2008-07-17 | 2011-06-15 | 日本电气株式会社 | Imaging device, imaging method and program |
CN104519263A (en) * | 2013-09-27 | 2015-04-15 | 联想(北京)有限公司 | Method for acquiring image and electronic device |
CN105391922A (en) * | 2014-09-02 | 2016-03-09 | 宏达国际电子股份有限公司 | Electronic device and image capture method thereof |
CN105578058A (en) * | 2016-02-03 | 2016-05-11 | 北京光年无限科技有限公司 | Shooting control method and device for intelligent robot and robot |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108241410A (en) * | 2016-12-27 | 2018-07-03 | 李景峰 | Mood recording and management device, system and method based on voice recognition |
CN107025437A (en) * | 2017-03-16 | 2017-08-08 | 南京邮电大学 | Intelligent photographing method and device based on intelligent composition and micro-expression analysis |
CN108509941A (en) * | 2018-04-20 | 2018-09-07 | 北京京东金融科技控股有限公司 | Emotional information generation method and device |
CN109117055A (en) * | 2018-07-26 | 2019-01-01 | 深圳市商汤科技有限公司 | Intelligent terminal and control method |
CN110610363A (en) * | 2019-08-27 | 2019-12-24 | 腾讯科技(深圳)有限公司 | Method, apparatus and medium for providing service by autonomously controlling identity information |
CN110610363B (en) * | 2019-08-27 | 2023-06-09 | 腾讯科技(深圳)有限公司 | Method, device and medium for providing service through autonomous control of identity information |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106203332A (en) | Method and system based on the change of intelligent robot visual identity face facial expression | |
CN109218619A (en) | Image acquiring method, device and system | |
US20080285791A1 (en) | Image processing apparatus and control method for same | |
CN107654406B (en) | Fan air supply control device, fan air supply control method and device | |
CN111597938B (en) | Living body detection and model training method and device | |
US11468683B2 (en) | Population density determination from multi-camera sourced imagery | |
CN105704386A (en) | Image acquisition method, electronic equipment and electronic device | |
US20200380690A1 (en) | Image processing method, apparatus, and storage medium | |
CN104506767A (en) | Method for generating different focal lengths of same scene by using continuous movement of motor and terminal | |
CN105960801A (en) | Enhancing video conferences | |
CN104219444A (en) | Method and device for processing video shooting | |
CN113516778A (en) | Model training data acquisition method and device, computer equipment and storage medium | |
CN113784059B (en) | Video generation and splicing method, equipment and storage medium for clothing production | |
CN115035580A (en) | Figure digital twinning construction method and system | |
CN110400119A (en) | Interview method, apparatus, computer equipment and storage medium based on artificial intelligence | |
US10924637B2 (en) | Playback method, playback device and computer-readable storage medium | |
CN110502117A (en) | Screenshot method and electric terminal in electric terminal | |
JP6862596B1 (en) | How to select video analysis equipment, wide area surveillance system and camera | |
CN109255310A (en) | Animal mood recognition method, device, terminal and readable storage medium |
CN110415318B (en) | Image processing method and device | |
WO2023217138A1 (en) | Parameter configuration method and apparatus, device, storage medium and product | |
KR101468347B1 (en) | Method and arrangement for identifying virtual visual information in images | |
CN105467741B (en) | Panorama photographing method and terminal |
US10282633B2 (en) | Cross-asset media analysis and processing | |
CN102187663A (en) | Image pickup apparatus, image pick-up method and integrated circuit |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20161207 |