Useful links
Machine learning and vision data sets
- Google: visual databases for machine learning
- The UC Irvine Machine Learning Repository
- The Institute for Digital Research and Education (IDRE) Data Analysis Examples
- The Singapore Maritime Dataset (SMD)
- The Open Science Framework
- Nature.com: A dataset of free-viewing eye-movement recordings
- IEEE Xplore Digital Library: Overview of Eye tracking Datasets
- MIT Saliency Benchmark: Saliency Model performances and datasets
- LSUN'17: SALICON Saliency Prediction Challenge
- Kaggle: The Home of Data Science & Machine Learning
- Wikimedia IRC
- Russian Sentence Corpus (RSC)
- Google data set search
- COCO: large-scale object detection, segmentation, and captioning dataset
- The COCO-Stuff dataset with dense pixel-level annotations
- Michael Dorr's (Harvard University) data sets
Educational Resources
- Open Science Framework: Open cognitive task tutorials
- Go Cognitive: Free materials for students, educators, and researchers in cognitive psychology and cognitive neuroscience
- MILA: Montreal Institute for Learning Algorithms resources
- Neuronal Dynamics: an online book on neuronal dynamics, with a description of the leaky integrate-and-fire layer and the mathematics underlying LIF algorithms
- 3Blue1Brown: a series of lectures on neural networks
- Michael Freeman: An Introduction to Hierarchical Modeling
Important and/or new materials on attention, vision, and computational models
- Important papers + information on the diffusion model
- Top deep learning papers
- Zhang, R., Isola, P., Efros, A. A., Shechtman, E., & Wang, O. (2018). The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. arXiv:1801.03924. Video resource
- Hooge, I., Holmqvist, K., & Nyström, M. (2016). The pupil is faster than the corneal reflection (CR): Are video based pupil-CR eye trackers suitable for studying detailed dynamics of eye movements? Vision Research, 128, 6–18. doi:10.1016/j.visres.2016.09.002
- Itti, L. & Koch, C. (2000). A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40(10–12), 1489–1506. doi:10.1016/S0042-6989(99)00163-7
- Fatahi, M., Ahmadi, M., Ahmadi, A., Shahsavari, M., & Devienne, P. (2016). Towards an spiking deep belief network for face recognition application. 6th International Conference on Computer and Knowledge Engineering (ICCKE). doi:10.1109/ICCKE.2016.7802132
Varia
- The Neural Network Zoo: A list of neural network architectures
Useful code
- Code for sampling methods. Some of our analyses involve point comparisons along a continuous distribution, where traditional comparison methods may not apply. The files below use resampling methods, and bootstrapping in particular, to solve this problem: whereas many standard statistics assume a normal distribution, resampling lets you estimate the true mean, standard deviation, and other parameters from a smaller subsection (sample) of your data, with no assumption about the shape of the underlying distribution. The original source code was provided by Dr. Amelia Hunt, University of Aberdeen (UK).
bootstrappercent.m
permutationData.mat
permutationExample.m
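The MATLAB files above implement this approach; as an illustration only (not a translation of the original code, and with hypothetical function and variable names), a minimal Python sketch of a percentile bootstrap might look like:

```python
import random
import statistics

def bootstrap_percentile(data, stat=statistics.mean, n_boot=10000, ci=95, seed=0):
    """Estimate a confidence interval for `stat` by resampling with replacement.

    No assumption is made about the shape of the underlying distribution:
    each replicate draws len(data) observations with replacement, recomputes
    the statistic, and the CI is read off the percentiles of the replicates.
    """
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(stat(rng.choices(data, k=n)) for _ in range(n_boot))
    alpha = (100 - ci) / 2
    lo = reps[int(n_boot * alpha / 100)]
    hi = reps[int(n_boot * (100 - alpha) / 100) - 1]
    return lo, hi

# Example: 95% CI for the mean of a small sample
sample = [2.1, 2.4, 1.9, 2.8, 2.2, 2.6, 2.0, 2.5]
lo, hi = bootstrap_percentile(sample)
```

The same resampling loop works for any statistic (median, correlation, a difference between conditions) simply by swapping the `stat` function.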