Deep Reinforcement Learning | ÚFAL
Resource history | v2 (current) | updated by jjones
Deep Reinforcement Learning | ÚFAL
see v2 | updated by jjones | Edit resource "Deep Learning | ÚFAL"
- Title
- Deep Reinforcement Learning | ÚFAL
- Type
- Course
- Created
- 2020
- Description
- In recent years, reinforcement learning has been combined with deep neural networks, giving rise to game agents with super-human performance (for example in Go, chess, or 1v1 Dota 2), trained solely by self-play, datacenter cooling algorithms 50% more efficient than those of trained human operators, and improved machine translation. The goal of the course is to introduce reinforcement learning employing deep neural networks, focusing both on the theory and on practical implementations. Python programming skills and familiarity with TensorFlow (or another deep learning framework) are required, to the extent of the NPFL114 course. No previous knowledge of reinforcement learning is necessary.
- Link
- https://ufal.mff.cuni.cz/courses/npfl122/
- Identifier
- NPFL122
Deep Learning | ÚFAL
see v1 | created by jjones | Add resource "Deep Learning | ÚFAL"
- Title
- Deep Learning | ÚFAL
- Type
- Course
- Created
- 2020
- Description
- In recent years, deep neural networks have been used to solve complex machine-learning problems and have achieved state-of-the-art results in many areas. The goal of the course is to introduce deep neural networks, from the basics to the latest advances. The course will focus both on theory and on practical aspects (students will implement and train several deep neural networks capable of achieving state-of-the-art results, for example in named entity recognition, dependency parsing, machine translation, image labeling, or playing video games). No previous knowledge of artificial neural networks is required, but a basic understanding of machine learning is advisable.
- Link
- http://ufal.mff.cuni.cz/courses/npfl114/
- Identifier
- no value
Authors
This resource has no history of related authors.