State of the art on egocentric saliency prediction #2
Can saliency map models predict human egocentric visual attention? This experiment compares saliency map models to find which one best predicts human egocentric visual attention. As a dataset, they recorded several sequences in the same environment, a room: four different people (one at a time) sat on a chair while another person walked randomly around them. Both subjects looked around the room, moving their heads freely, for one minute. A rough sketch of this evaluation idea is below.
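As an illustration of the evaluation protocol described above, this minimal sketch computes a bottom-up saliency map for a single frame and checks whether it assigns higher values at gaze fixation points than at random points (an AUC-style score, which is how such comparisons are commonly done). The spectral-residual saliency model, the file name, and the fixation coordinates are stand-in assumptions, not details taken from the paper.

```python
# Requires opencv-contrib-python and scikit-learn.
import cv2
import numpy as np
from sklearn.metrics import roc_auc_score

frame = cv2.imread("egocentric_frame.png")        # hypothetical frame from a head-mounted camera
fixations = [(120, 240), (131, 255), (118, 250)]  # hypothetical (y, x) gaze fixation points

# Spectral-residual saliency is used here as a generic stand-in for the
# bottom-up saliency models compared in the paper, not the authors' exact model.
sal = cv2.saliency.StaticSaliencySpectralResidual_create()
ok, sal_map = sal.computeSaliency(frame)
assert ok, "saliency computation failed"

# Sample the saliency map at fixation points (positives) and at an equal
# number of random points (negatives), then score the separation with AUC.
h, w = sal_map.shape
rng = np.random.default_rng(0)
random_pts = [(int(rng.integers(h)), int(rng.integers(w))) for _ in fixations]

scores = [float(sal_map[y, x]) for (y, x) in fixations + random_pts]
labels = [1] * len(fixations) + [0] * len(random_pts)
print("AUC:", roc_auc_score(labels, scores))  # 0.5 = chance, 1.0 = perfect prediction
```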
An attention-based activity recognition for egocentric video. In this paper, they propose a different method to improve activity recognition. A recent work predicted key objects from hand manipulation, but that approach is problematic: not all objects can be manipulated by hand, and not every object being manipulated is important to human visual attention. The dataset of this work consists of egocentric videos recorded by 20 different people in their own homes.
You should identify and read some (3-5) scientific papers or works that are similar to your research.
I think that in the egocentric vision literature, the term "attention" is often used as a concept similar to saliency.
I have found some papers that I would like you to look at and write a short summary (one paragraph for each):
Yamada, Kentaro, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto, and Kazuo Hiraki. "Can saliency map models predict human egocentric visual attention?." In Computer Vision–ACCV 2010 Workshops, pp. 420-429. Springer Berlin Heidelberg, 2010.
Matsuo, Kenji, Kentaro Yamada, Satoshi Ueno, and Sei Naito. "An attention-based activity recognition for egocentric video." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 551-556. 2014.
Bettadapura, Vinay, Irfan Essa, and Caroline Pantofaru. "Egocentric field-of-view localization using first-person point-of-view devices." In 2015 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 626-633. IEEE, 2015.
Fathi, Alireza, Yin Li, and James M. Rehg. "Learning to recognize daily actions using gaze." In Computer Vision–ECCV 2012, pp. 314-327. Springer Berlin Heidelberg, 2012.
In particular, I want you to answer these questions:
Reply to this issue with a paragraph each time you finish reading a paper. Make sure you answer the questions I posed.