Papers

Here are some papers I think are really good and worth knowing about. Don't worry if the math is too complicated to follow; just focus on the big ideas the paper presents. Reading papers is often the best way to come up with ideas for new papers and to reinforce your understanding of the subject.

List of Papers (in no particular order)

  • Mastering the game of Go with deep neural networks and tree search (2016), D. Silver et al. [pdf]
  • ImageNet classification with deep convolutional neural networks (2012), A. Krizhevsky et al. [pdf]
  • Visualizing and understanding convolutional networks (2014), M. Zeiler and R. Fergus [pdf]
  • CNN features off-the-shelf: An astounding baseline for recognition (2014), A. Razavian et al. [pdf]
  • Generative adversarial nets (2014), I. Goodfellow et al. [pdf]
  • Deep neural networks are easily fooled: High confidence predictions for unrecognizable images (2015), A. Nguyen et al. [pdf]
  • Dropout: A simple way to prevent neural networks from overfitting (2014), N. Srivastava et al. [pdf]
  • Adam: A method for stochastic optimization (2014), D. Kingma and J. Ba [pdf]
  • Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks (2015), S. Ren et al. [pdf]
  • Long-term recurrent convolutional networks for visual recognition and description (2015), J. Donahue et al. [pdf]
  • GloVe: Global vectors for word representation (2014), J. Pennington et al. [pdf]
  • VQA: Visual question answering (2015), S. Antol et al. [pdf]
  • DeepFace: Closing the gap to human-level performance in face verification (2014), Y. Taigman et al. [pdf]
  • Recurrent neural network based language model (2010), T. Mikolov et al. [pdf]
  • A fast learning algorithm for deep belief nets (2006), G. Hinton et al. [pdf]
  • Neural machine translation by jointly learning to align and translate (2014), D. Bahdanau et al. [pdf]
  • Sequence to sequence learning with neural networks (2014), I. Sutskever et al. [pdf]
  • Generating sequences with recurrent neural networks (2013), A. Graves. [pdf]
  • Playing Atari with deep reinforcement learning (2013), V. Mnih et al. [pdf]
  • Google's neural machine translation system: Bridging the gap between human and machine translation (2016), Y. Wu et al. [pdf]
  • SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size (2016), F. Iandola et al. [pdf]
  • WaveNet: A generative model for raw audio (2016), A. van den Oord et al. [pdf] [web]
  • Gradient-based learning applied to document recognition (1998), Y. LeCun et al. [pdf]
  • Long short-term memory (1997), S. Hochreiter and J. Schmidhuber. [pdf]

Extra Credit (1% per paper)

For an opportunity to earn extra credit in the class, pick a paper that you find interesting (it doesn't have to be on the list) and answer the following questions (minimum 3 detailed sentences per question):

  1. Title/Authors: What is the title of the paper you read and who are the authors?
  2. Link: Link to the paper you read.
  3. Summary: Summarize the problem the paper attempted to solve, how they solved it (in general terms), and their results.
  4. Contributions: What were some of the key contributions in the paper?
  5. What did you like?: Based on the material presented in the paper, what aspects did you find particularly interesting?
  6. How does this relate?: How does the content covered in the paper relate to what we've been talking about in the class?

If you do this well, you can earn 1% per paper you write about. No credit will be given for submissions that clearly show a lack of effort or thought. Please email me a PDF containing your answers to the questions. My email is [email protected].
